2026-03-08T22:52:14.696 INFO:root:teuthology version: 1.2.4.dev6+g1c580df7a
2026-03-08T22:52:14.700 DEBUG:teuthology.report:Pushing job info to http://localhost:8080
2026-03-08T22:52:14.744 INFO:teuthology.run:Config:
archive_path: /archive/kyr-2026-03-08_22:22:45-orch:cephadm-squid-none-default-vps/289
branch: squid
description: orch:cephadm/with-work/{0-distro/ubuntu_22.04 fixed-2 mode/root mon_election/connectivity msgr/async-v2only start tasks/rotate-keys}
email: null
first_in_suite: false
flavor: default
job_id: '289'
ktype: distro
last_in_suite: false
machine_type: vps
name: kyr-2026-03-08_22:22:45-orch:cephadm-squid-none-default-vps
no_nested_subset: false
openstack:
- volumes:
    count: 4
    size: 10
os_type: ubuntu
os_version: '22.04'
overrides:
  admin_socket:
    branch: squid
  ansible.cephlab:
    branch: main
    skip_tags: nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs
    vars:
      timezone: UTC
  ceph:
    conf:
      global:
        mon election default strategy: 3
        ms bind msgr1: false
        ms bind msgr2: true
        ms type: async
      mgr:
        debug mgr: 20
        debug ms: 1
      mon:
        debug mon: 20
        debug ms: 1
        debug paxos: 20
      osd:
        debug ms: 1
        debug osd: 20
        osd mclock iops capacity threshold hdd: 49000
        osd shutdown pgref assert: true
    flavor: default
    log-ignorelist:
    - \(MDS_ALL_DOWN\)
    - \(MDS_UP_LESS_THAN_MAX\)
    log-only-match:
    - CEPHADM_
    sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
  ceph-deploy:
    conf:
      client:
        log file: /var/log/ceph/ceph-$name.$pid.log
      mon: {}
  cephadm:
    cephadm_mode: root
  install:
    ceph:
      flavor: default
      sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
    extra_system_packages:
      deb:
      - python3-xmltodict
      - python3-jmespath
      rpm:
      - bzip2
      - perl-Test-Harness
      - python3-xmltodict
      - python3-jmespath
  workunit:
    branch: tt-squid
    sha1: 569c3e99c9b32a51b4eaf08731c728f4513ed589
owner: kyr
priority: 1000
repo: https://github.com/ceph/ceph.git
roles:
- - mon.a
  - mon.c
  - mgr.y
  - osd.0
  - osd.1
  - osd.2
  - osd.3
  - client.0
  - ceph.rgw.foo.a
  - node-exporter.a
  - alertmanager.a
- - mon.b
  - mgr.x
  - osd.4
  - osd.5
  - osd.6
  - osd.7
  - client.1
  - prometheus.a
  - grafana.a
  - node-exporter.b
  - ceph.iscsi.iscsi.a
seed: 8017
sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
sleep_before_teardown: 0
subset: 1/64
suite: orch:cephadm
suite_branch: tt-squid
suite_path: /home/teuthos/src/github.com_kshtsk_ceph_569c3e99c9b32a51b4eaf08731c728f4513ed589/qa
suite_relpath: qa
suite_repo: https://github.com/kshtsk/ceph.git
suite_sha1: 569c3e99c9b32a51b4eaf08731c728f4513ed589
targets:
  vm06.local: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMiGgpod03Dg/wByM99jZIGBY3wLrvx2wOvV0swZJAH9NPax1CPtnxj8XBKCkx6ct65vj+VtsHxomdYvsyjJCIA=
  vm11.local: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJQiwQGV+IBhNIa6AoL+6PNEuAIn8sIcUl3zS55/F+6EJchrFo/dbNhtnOR05pezErx7VZhk4k7mGUH1m0Eh+cA=
tasks:
- install: null
- cephadm:
    conf:
      mgr:
        debug mgr: 20
        debug ms: 1
- cephadm.shell:
    mon.a:
    - "set -ex\nfor f in osd.0 osd.1 osd.2 osd.3 osd.4 osd.5 osd.6 osd.7 mgr.y mgr.x\n\
      do\n  echo \"rotating key for $f\"\n  K=$(ceph auth get-key $f)\n  NK=\"\
      $K\"\n  ceph orch daemon rotate-key $f\n  while [ \"$K\" == \"$NK\" ]; do\n\
      \    sleep 5\n    NK=$(ceph auth get-key $f)\n  done\ndone\n"
teuthology:
  fragments_dropped: []
  meta: {}
  postmerge: []
teuthology_branch: clyso-debian-13
teuthology_repo: https://github.com/clyso/teuthology
teuthology_sha1: 1c580df7a9c7c2aadc272da296344fd99f27c444
timestamp: 2026-03-08_22:22:45
tube: vps
user: kyr
verbose: false
worker_log: /home/teuthos/.teuthology/dispatcher/dispatcher.vps.611473
2026-03-08T22:52:14.744 INFO:teuthology.run:suite_path is set to /home/teuthos/src/github.com_kshtsk_ceph_569c3e99c9b32a51b4eaf08731c728f4513ed589/qa; will attempt to use it
2026-03-08T22:52:14.745 INFO:teuthology.run:Found tasks at /home/teuthos/src/github.com_kshtsk_ceph_569c3e99c9b32a51b4eaf08731c728f4513ed589/qa/tasks
2026-03-08T22:52:14.745 INFO:teuthology.run_tasks:Running task internal.check_packages...
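The cephadm.shell task in the config above embeds a rotate-and-wait loop: for each daemon it records the current auth key, asks the orchestrator to rotate it, then polls `ceph auth get-key` until the key actually changes. The sketch below reproduces that poll-until-changed pattern in a self-contained form; `get_key` and `rotate` are hypothetical stand-ins for `ceph auth get-key $f` and `ceph orch daemon rotate-key $f` (here the stub "rotates" after the first poll, and the real 5-second sleep is omitted so the sketch runs instantly).

```shell
# Poll-until-changed sketch of the rotate-keys loop from the cephadm.shell
# task. `get_key`/`rotate` are stubs, NOT real ceph commands.
state=0

get_key() {
    # Stand-in for `ceph auth get-key $f`: returns the old key until the
    # simulated rotation has taken effect (state >= 1).
    if [ "$state" -ge 1 ]; then echo "NEWKEY"; else echo "OLDKEY"; fi
}

rotate() { :; }   # stand-in for `ceph orch daemon rotate-key $f` (async)

K=$(get_key)      # key before rotation
NK="$K"
rotate
# Rotation is asynchronous, so poll until the stored key differs from the
# one we captured before the rotate request. (The real task uses `==` and
# `sleep 5`; plain `=` is the POSIX form.)
while [ "$K" = "$NK" ]; do
    state=$((state + 1))
    NK=$(get_key)
done
echo "rotated: $K -> $NK"
```

The important property the loop relies on is that `ceph orch daemon rotate-key` returns before the new key is committed, which is why the test polls rather than reading the key once.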
2026-03-08T22:52:14.745 INFO:teuthology.task.internal:Checking packages...
2026-03-08T22:52:14.745 INFO:teuthology.task.internal:Checking packages for os_type 'ubuntu', flavor 'default' and ceph hash 'e911bdebe5c8faa3800735d1568fcdca65db60df'
2026-03-08T22:52:14.745 WARNING:teuthology.packaging:More than one of ref, tag, branch, or sha1 supplied; using branch
2026-03-08T22:52:14.745 INFO:teuthology.packaging:ref: None
2026-03-08T22:52:14.745 INFO:teuthology.packaging:tag: None
2026-03-08T22:52:14.745 INFO:teuthology.packaging:branch: squid
2026-03-08T22:52:14.745 INFO:teuthology.packaging:sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-08T22:52:14.745 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&ref=squid
2026-03-08T22:52:15.420 INFO:teuthology.task.internal:Found packages for ceph version 19.2.3-678-ge911bdeb-1jammy
2026-03-08T22:52:15.421 INFO:teuthology.run_tasks:Running task internal.buildpackages_prep...
2026-03-08T22:52:15.421 INFO:teuthology.task.internal:no buildpackages task found
2026-03-08T22:52:15.421 INFO:teuthology.run_tasks:Running task internal.save_config...
2026-03-08T22:52:15.422 INFO:teuthology.task.internal:Saving configuration
2026-03-08T22:52:15.426 INFO:teuthology.run_tasks:Running task internal.check_lock...
2026-03-08T22:52:15.427 INFO:teuthology.task.internal.check_lock:Checking locks...
2026-03-08T22:52:15.438 DEBUG:teuthology.task.internal.check_lock:machine status is {'name': 'vm06.local', 'description': '/archive/kyr-2026-03-08_22:22:45-orch:cephadm-squid-none-default-vps/289', 'up': True, 'machine_type': 'vps', 'is_vm': True, 'vm_host': {'name': 'localhost', 'description': None, 'up': True, 'machine_type': 'libvirt', 'is_vm': False, 'vm_host': None, 'os_type': None, 'os_version': None, 'arch': None, 'locked': True, 'locked_since': None, 'locked_by': None, 'mac_address': None, 'ssh_pub_key': None}, 'os_type': 'ubuntu', 'os_version': '22.04', 'arch': 'x86_64', 'locked': True, 'locked_since': '2026-03-08 22:51:11.572283', 'locked_by': 'kyr', 'mac_address': '52:55:00:00:00:06', 'ssh_pub_key': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMiGgpod03Dg/wByM99jZIGBY3wLrvx2wOvV0swZJAH9NPax1CPtnxj8XBKCkx6ct65vj+VtsHxomdYvsyjJCIA='}
2026-03-08T22:52:15.443 DEBUG:teuthology.task.internal.check_lock:machine status is {'name': 'vm11.local', 'description': '/archive/kyr-2026-03-08_22:22:45-orch:cephadm-squid-none-default-vps/289', 'up': True, 'machine_type': 'vps', 'is_vm': True, 'vm_host': {'name': 'localhost', 'description': None, 'up': True, 'machine_type': 'libvirt', 'is_vm': False, 'vm_host': None, 'os_type': None, 'os_version': None, 'arch': None, 'locked': True, 'locked_since': None, 'locked_by': None, 'mac_address': None, 'ssh_pub_key': None}, 'os_type': 'ubuntu', 'os_version': '22.04', 'arch': 'x86_64', 'locked': True, 'locked_since': '2026-03-08 22:51:11.572677', 'locked_by': 'kyr', 'mac_address': '52:55:00:00:00:0b', 'ssh_pub_key': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJQiwQGV+IBhNIa6AoL+6PNEuAIn8sIcUl3zS55/F+6EJchrFo/dbNhtnOR05pezErx7VZhk4k7mGUH1m0Eh+cA='}
2026-03-08T22:52:15.443 INFO:teuthology.run_tasks:Running task internal.add_remotes...
2026-03-08T22:52:15.444 INFO:teuthology.task.internal:roles: ubuntu@vm06.local - ['mon.a', 'mon.c', 'mgr.y', 'osd.0', 'osd.1', 'osd.2', 'osd.3', 'client.0', 'ceph.rgw.foo.a', 'node-exporter.a', 'alertmanager.a']
2026-03-08T22:52:15.444 INFO:teuthology.task.internal:roles: ubuntu@vm11.local - ['mon.b', 'mgr.x', 'osd.4', 'osd.5', 'osd.6', 'osd.7', 'client.1', 'prometheus.a', 'grafana.a', 'node-exporter.b', 'ceph.iscsi.iscsi.a']
2026-03-08T22:52:15.444 INFO:teuthology.run_tasks:Running task console_log...
2026-03-08T22:52:15.449 DEBUG:teuthology.task.console_log:vm06 does not support IPMI; excluding
2026-03-08T22:52:15.453 DEBUG:teuthology.task.console_log:vm11 does not support IPMI; excluding
2026-03-08T22:52:15.453 DEBUG:teuthology.exit:Installing handler: Handler(exiter=, func=.kill_console_loggers at 0x7f1ec47d8ca0>, signals=[15])
2026-03-08T22:52:15.453 INFO:teuthology.run_tasks:Running task internal.connect...
2026-03-08T22:52:15.454 INFO:teuthology.task.internal:Opening connections...
2026-03-08T22:52:15.454 DEBUG:teuthology.task.internal:connecting to ubuntu@vm06.local
2026-03-08T22:52:15.455 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm06.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-08T22:52:15.511 DEBUG:teuthology.task.internal:connecting to ubuntu@vm11.local
2026-03-08T22:52:15.512 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm11.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-08T22:52:15.571 INFO:teuthology.run_tasks:Running task internal.push_inventory...
2026-03-08T22:52:15.571 DEBUG:teuthology.orchestra.run.vm06:> uname -m
2026-03-08T22:52:15.604 INFO:teuthology.orchestra.run.vm06.stdout:x86_64
2026-03-08T22:52:15.604 DEBUG:teuthology.orchestra.run.vm06:> cat /etc/os-release
2026-03-08T22:52:15.648 INFO:teuthology.orchestra.run.vm06.stdout:PRETTY_NAME="Ubuntu 22.04.5 LTS"
2026-03-08T22:52:15.648 INFO:teuthology.orchestra.run.vm06.stdout:NAME="Ubuntu"
2026-03-08T22:52:15.648 INFO:teuthology.orchestra.run.vm06.stdout:VERSION_ID="22.04"
2026-03-08T22:52:15.648 INFO:teuthology.orchestra.run.vm06.stdout:VERSION="22.04.5 LTS (Jammy Jellyfish)"
2026-03-08T22:52:15.648 INFO:teuthology.orchestra.run.vm06.stdout:VERSION_CODENAME=jammy
2026-03-08T22:52:15.648 INFO:teuthology.orchestra.run.vm06.stdout:ID=ubuntu
2026-03-08T22:52:15.648 INFO:teuthology.orchestra.run.vm06.stdout:ID_LIKE=debian
2026-03-08T22:52:15.648 INFO:teuthology.orchestra.run.vm06.stdout:HOME_URL="https://www.ubuntu.com/"
2026-03-08T22:52:15.648 INFO:teuthology.orchestra.run.vm06.stdout:SUPPORT_URL="https://help.ubuntu.com/"
2026-03-08T22:52:15.648 INFO:teuthology.orchestra.run.vm06.stdout:BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
2026-03-08T22:52:15.648 INFO:teuthology.orchestra.run.vm06.stdout:PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
2026-03-08T22:52:15.648 INFO:teuthology.orchestra.run.vm06.stdout:UBUNTU_CODENAME=jammy
2026-03-08T22:52:15.648 INFO:teuthology.lock.ops:Updating vm06.local on lock server
2026-03-08T22:52:15.653 DEBUG:teuthology.orchestra.run.vm11:> uname -m
2026-03-08T22:52:15.657 INFO:teuthology.orchestra.run.vm11.stdout:x86_64
2026-03-08T22:52:15.657 DEBUG:teuthology.orchestra.run.vm11:> cat /etc/os-release
2026-03-08T22:52:15.701 INFO:teuthology.orchestra.run.vm11.stdout:PRETTY_NAME="Ubuntu 22.04.5 LTS"
2026-03-08T22:52:15.701 INFO:teuthology.orchestra.run.vm11.stdout:NAME="Ubuntu"
2026-03-08T22:52:15.701 INFO:teuthology.orchestra.run.vm11.stdout:VERSION_ID="22.04"
2026-03-08T22:52:15.701 INFO:teuthology.orchestra.run.vm11.stdout:VERSION="22.04.5 LTS (Jammy Jellyfish)"
2026-03-08T22:52:15.701 INFO:teuthology.orchestra.run.vm11.stdout:VERSION_CODENAME=jammy
2026-03-08T22:52:15.701 INFO:teuthology.orchestra.run.vm11.stdout:ID=ubuntu
2026-03-08T22:52:15.702 INFO:teuthology.orchestra.run.vm11.stdout:ID_LIKE=debian
2026-03-08T22:52:15.702 INFO:teuthology.orchestra.run.vm11.stdout:HOME_URL="https://www.ubuntu.com/"
2026-03-08T22:52:15.702 INFO:teuthology.orchestra.run.vm11.stdout:SUPPORT_URL="https://help.ubuntu.com/"
2026-03-08T22:52:15.702 INFO:teuthology.orchestra.run.vm11.stdout:BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
2026-03-08T22:52:15.702 INFO:teuthology.orchestra.run.vm11.stdout:PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
2026-03-08T22:52:15.702 INFO:teuthology.orchestra.run.vm11.stdout:UBUNTU_CODENAME=jammy
2026-03-08T22:52:15.702 INFO:teuthology.lock.ops:Updating vm11.local on lock server
2026-03-08T22:52:15.706 INFO:teuthology.run_tasks:Running task internal.serialize_remote_roles...
2026-03-08T22:52:15.708 INFO:teuthology.run_tasks:Running task internal.check_conflict...
2026-03-08T22:52:15.709 INFO:teuthology.task.internal:Checking for old test directory...
2026-03-08T22:52:15.709 DEBUG:teuthology.orchestra.run.vm06:> test '!' -e /home/ubuntu/cephtest
2026-03-08T22:52:15.710 DEBUG:teuthology.orchestra.run.vm11:> test '!' -e /home/ubuntu/cephtest
2026-03-08T22:52:15.745 INFO:teuthology.run_tasks:Running task internal.check_ceph_data...
2026-03-08T22:52:15.746 INFO:teuthology.task.internal:Checking for non-empty /var/lib/ceph...
2026-03-08T22:52:15.746 DEBUG:teuthology.orchestra.run.vm06:> test -z $(ls -A /var/lib/ceph)
2026-03-08T22:52:15.754 DEBUG:teuthology.orchestra.run.vm11:> test -z $(ls -A /var/lib/ceph)
2026-03-08T22:52:15.755 INFO:teuthology.orchestra.run.vm06.stderr:ls: cannot access '/var/lib/ceph': No such file or directory
2026-03-08T22:52:15.790 INFO:teuthology.orchestra.run.vm11.stderr:ls: cannot access '/var/lib/ceph': No such file or directory
2026-03-08T22:52:15.790 INFO:teuthology.run_tasks:Running task internal.vm_setup...
2026-03-08T22:52:15.799 DEBUG:teuthology.orchestra.run.vm06:> test -e /ceph-qa-ready
2026-03-08T22:52:15.802 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-08T22:52:16.222 DEBUG:teuthology.orchestra.run.vm11:> test -e /ceph-qa-ready
2026-03-08T22:52:16.225 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-08T22:52:16.466 INFO:teuthology.run_tasks:Running task internal.base...
2026-03-08T22:52:16.467 INFO:teuthology.task.internal:Creating test directory...
2026-03-08T22:52:16.467 DEBUG:teuthology.orchestra.run.vm06:> mkdir -p -m0755 -- /home/ubuntu/cephtest
2026-03-08T22:52:16.468 DEBUG:teuthology.orchestra.run.vm11:> mkdir -p -m0755 -- /home/ubuntu/cephtest
2026-03-08T22:52:16.472 INFO:teuthology.run_tasks:Running task internal.archive_upload...
2026-03-08T22:52:16.475 INFO:teuthology.run_tasks:Running task internal.archive...
2026-03-08T22:52:16.477 INFO:teuthology.task.internal:Creating archive directory...
2026-03-08T22:52:16.477 DEBUG:teuthology.orchestra.run.vm06:> install -d -m0755 -- /home/ubuntu/cephtest/archive
2026-03-08T22:52:16.513 DEBUG:teuthology.orchestra.run.vm11:> install -d -m0755 -- /home/ubuntu/cephtest/archive
2026-03-08T22:52:16.519 INFO:teuthology.run_tasks:Running task internal.coredump...
2026-03-08T22:52:16.520 INFO:teuthology.task.internal:Enabling coredump saving...
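An aside on the check_ceph_data step above: the check `test -z $(ls -A /var/lib/ceph)` succeeds here even though both hosts print `ls: cannot access '/var/lib/ceph': No such file or directory` to stderr, and the run proceeds to the next task. The reason is a shell quoting subtlety: when `ls` fails, the unquoted command substitution expands to nothing at all, so `test` is left with the single argument `-z`, and a one-argument `test` is true whenever that argument is a non-empty string. The following self-contained sketch (the directory name is a hypothetical path assumed not to exist) demonstrates the behavior:

```shell
# `test -z $(ls -A DIR)` on a missing DIR: ls fails, the unquoted
# substitution expands to zero words, and `test` receives only "-z".
# By POSIX rules a one-argument test is true iff the argument is
# non-empty, so "test -z" (with its operand gone) evaluates to true.
dir=/nonexistent-dir-for-demo   # hypothetical path, assumed absent
if test -z $(ls -A "$dir" 2>/dev/null); then
    result="treated as empty"
else
    result="not empty"
fi
echo "$result"
```

So a missing `/var/lib/ceph` passes the "non-empty?" check for the same reason an empty one does, which is the desired outcome for fresh VMs.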
2026-03-08T22:52:16.520 DEBUG:teuthology.orchestra.run.vm06:> test -f /run/.containerenv -o -f /.dockerenv
2026-03-08T22:52:16.558 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-08T22:52:16.558 DEBUG:teuthology.orchestra.run.vm11:> test -f /run/.containerenv -o -f /.dockerenv
2026-03-08T22:52:16.561 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-08T22:52:16.561 DEBUG:teuthology.orchestra.run.vm06:> install -d -m0755 -- /home/ubuntu/cephtest/archive/coredump && sudo sysctl -w kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core && echo kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core | sudo tee -a /etc/sysctl.conf
2026-03-08T22:52:16.601 DEBUG:teuthology.orchestra.run.vm11:> install -d -m0755 -- /home/ubuntu/cephtest/archive/coredump && sudo sysctl -w kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core && echo kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core | sudo tee -a /etc/sysctl.conf
2026-03-08T22:52:16.607 INFO:teuthology.orchestra.run.vm06.stdout:kernel.core_pattern = /home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-08T22:52:16.610 INFO:teuthology.orchestra.run.vm11.stdout:kernel.core_pattern = /home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-08T22:52:16.612 INFO:teuthology.orchestra.run.vm06.stdout:kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-08T22:52:16.617 INFO:teuthology.orchestra.run.vm11.stdout:kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-08T22:52:16.618 INFO:teuthology.run_tasks:Running task internal.sudo...
2026-03-08T22:52:16.620 INFO:teuthology.task.internal:Configuring sudo...
2026-03-08T22:52:16.620 DEBUG:teuthology.orchestra.run.vm06:> sudo sed -i.orig.teuthology -e 's/^\([^#]*\) \(requiretty\)/\1 !\2/g' -e 's/^\([^#]*\) !\(visiblepw\)/\1 \2/g' /etc/sudoers
2026-03-08T22:52:16.657 DEBUG:teuthology.orchestra.run.vm11:> sudo sed -i.orig.teuthology -e 's/^\([^#]*\) \(requiretty\)/\1 !\2/g' -e 's/^\([^#]*\) !\(visiblepw\)/\1 \2/g' /etc/sudoers
2026-03-08T22:52:16.669 INFO:teuthology.run_tasks:Running task internal.syslog...
2026-03-08T22:52:16.672 INFO:teuthology.task.internal.syslog:Starting syslog monitoring...
2026-03-08T22:52:16.672 DEBUG:teuthology.orchestra.run.vm06:> mkdir -p -m0755 -- /home/ubuntu/cephtest/archive/syslog
2026-03-08T22:52:16.709 DEBUG:teuthology.orchestra.run.vm11:> mkdir -p -m0755 -- /home/ubuntu/cephtest/archive/syslog
2026-03-08T22:52:16.712 DEBUG:teuthology.orchestra.run.vm06:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-08T22:52:16.755 DEBUG:teuthology.orchestra.run.vm06:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-08T22:52:16.798 DEBUG:teuthology.orchestra.run.vm06:> set -ex
2026-03-08T22:52:16.799 DEBUG:teuthology.orchestra.run.vm06:> sudo dd of=/etc/rsyslog.d/80-cephtest.conf
2026-03-08T22:52:16.848 DEBUG:teuthology.orchestra.run.vm11:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-08T22:52:16.852 DEBUG:teuthology.orchestra.run.vm11:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-08T22:52:16.897 DEBUG:teuthology.orchestra.run.vm11:> set -ex
2026-03-08T22:52:16.897 DEBUG:teuthology.orchestra.run.vm11:> sudo dd of=/etc/rsyslog.d/80-cephtest.conf
2026-03-08T22:52:16.946 DEBUG:teuthology.orchestra.run.vm06:> sudo service rsyslog restart
2026-03-08T22:52:16.947 DEBUG:teuthology.orchestra.run.vm11:> sudo service rsyslog restart
2026-03-08T22:52:17.001 INFO:teuthology.run_tasks:Running task internal.timer...
2026-03-08T22:52:17.003 INFO:teuthology.task.internal:Starting timer...
2026-03-08T22:52:17.003 INFO:teuthology.run_tasks:Running task pcp...
2026-03-08T22:52:17.015 INFO:teuthology.run_tasks:Running task selinux...
2026-03-08T22:52:17.018 INFO:teuthology.task.selinux:Excluding vm06: VMs are not yet supported
2026-03-08T22:52:17.018 INFO:teuthology.task.selinux:Excluding vm11: VMs are not yet supported
2026-03-08T22:52:17.018 DEBUG:teuthology.task.selinux:Getting current SELinux state
2026-03-08T22:52:17.018 DEBUG:teuthology.task.selinux:Existing SELinux modes: {}
2026-03-08T22:52:17.018 INFO:teuthology.task.selinux:Putting SELinux into permissive mode
2026-03-08T22:52:17.018 INFO:teuthology.run_tasks:Running task ansible.cephlab...
2026-03-08T22:52:17.026 DEBUG:teuthology.task:Applying overrides for task ansible.cephlab: {'branch': 'main', 'skip_tags': 'nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs', 'vars': {'timezone': 'UTC'}}
2026-03-08T22:52:17.027 DEBUG:teuthology.repo_utils:Setting repo remote to https://github.com/ceph/ceph-cm-ansible.git
2026-03-08T22:52:17.028 INFO:teuthology.repo_utils:Fetching github.com_ceph_ceph-cm-ansible_main from origin
2026-03-08T22:52:17.607 DEBUG:teuthology.repo_utils:Resetting repo at /home/teuthos/src/github.com_ceph_ceph-cm-ansible_main to origin/main
2026-03-08T22:52:17.612 INFO:teuthology.task.ansible:Playbook: [{'import_playbook': 'ansible_managed.yml'}, {'import_playbook': 'teuthology.yml'}, {'hosts': 'testnodes', 'tasks': [{'set_fact': {'ran_from_cephlab_playbook': True}}]}, {'import_playbook': 'testnodes.yml'}, {'import_playbook': 'container-host.yml'}, {'import_playbook': 'cobbler.yml'}, {'import_playbook': 'paddles.yml'}, {'import_playbook': 'pulpito.yml'}, {'hosts': 'testnodes', 'become': True, 'tasks': [{'name': 'Touch /ceph-qa-ready', 'file': {'path': '/ceph-qa-ready', 'state': 'touch'}, 'when': 'ran_from_cephlab_playbook|bool'}]}]
2026-03-08T22:52:17.613 DEBUG:teuthology.task.ansible:Running ansible-playbook -v --extra-vars '{"ansible_ssh_user": "ubuntu", "timezone": "UTC"}' -i /tmp/teuth_ansible_inventory5y7zrph2 --limit vm06.local,vm11.local /home/teuthos/src/github.com_ceph_ceph-cm-ansible_main/cephlab.yml --skip-tags nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs
2026-03-08T22:54:29.655 DEBUG:teuthology.task.ansible:Reconnecting to [Remote(name='ubuntu@vm06.local'), Remote(name='ubuntu@vm11.local')]
2026-03-08T22:54:29.656 INFO:teuthology.orchestra.remote:Trying to reconnect to host 'ubuntu@vm06.local'
2026-03-08T22:54:29.656 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm06.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-08T22:54:29.716 DEBUG:teuthology.orchestra.run.vm06:> true
2026-03-08T22:54:29.949 INFO:teuthology.orchestra.remote:Successfully reconnected to host 'ubuntu@vm06.local'
2026-03-08T22:54:29.949 INFO:teuthology.orchestra.remote:Trying to reconnect to host 'ubuntu@vm11.local'
2026-03-08T22:54:29.949 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm11.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-08T22:54:30.010 DEBUG:teuthology.orchestra.run.vm11:> true
2026-03-08T22:54:30.233 INFO:teuthology.orchestra.remote:Successfully reconnected to host 'ubuntu@vm11.local'
2026-03-08T22:54:30.233 INFO:teuthology.run_tasks:Running task clock...
2026-03-08T22:54:30.236 INFO:teuthology.task.clock:Syncing clocks and checking initial clock skew...
2026-03-08T22:54:30.236 INFO:teuthology.orchestra.run:Running command with timeout 360
2026-03-08T22:54:30.236 DEBUG:teuthology.orchestra.run.vm06:> sudo systemctl stop ntp.service || sudo systemctl stop ntpd.service || sudo systemctl stop chronyd.service ; sudo ntpd -gq || sudo chronyc makestep ; sudo systemctl start ntp.service || sudo systemctl start ntpd.service || sudo systemctl start chronyd.service ; PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-08T22:54:30.238 INFO:teuthology.orchestra.run:Running command with timeout 360
2026-03-08T22:54:30.238 DEBUG:teuthology.orchestra.run.vm11:> sudo systemctl stop ntp.service || sudo systemctl stop ntpd.service || sudo systemctl stop chronyd.service ; sudo ntpd -gq || sudo chronyc makestep ; sudo systemctl start ntp.service || sudo systemctl start ntpd.service || sudo systemctl start chronyd.service ; PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-08T22:54:30.257 INFO:teuthology.orchestra.run.vm06.stdout: 8 Mar 22:54:30 ntpd[15993]: ntpd 4.2.8p15@1.3728-o Wed Feb 16 17:13:02 UTC 2022 (1): Starting
2026-03-08T22:54:30.257 INFO:teuthology.orchestra.run.vm06.stdout: 8 Mar 22:54:30 ntpd[15993]: Command line: ntpd -gq
2026-03-08T22:54:30.257 INFO:teuthology.orchestra.run.vm06.stdout: 8 Mar 22:54:30 ntpd[15993]: ----------------------------------------------------
2026-03-08T22:54:30.258 INFO:teuthology.orchestra.run.vm06.stdout: 8 Mar 22:54:30 ntpd[15993]: ntp-4 is maintained by Network Time Foundation,
2026-03-08T22:54:30.258 INFO:teuthology.orchestra.run.vm06.stdout: 8 Mar 22:54:30 ntpd[15993]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
2026-03-08T22:54:30.258 INFO:teuthology.orchestra.run.vm06.stdout: 8 Mar 22:54:30 ntpd[15993]: corporation.  Support and training for ntp-4 are
2026-03-08T22:54:30.258 INFO:teuthology.orchestra.run.vm06.stdout: 8 Mar 22:54:30 ntpd[15993]: available at https://www.nwtime.org/support
2026-03-08T22:54:30.258 INFO:teuthology.orchestra.run.vm06.stdout: 8 Mar 22:54:30 ntpd[15993]: ----------------------------------------------------
2026-03-08T22:54:30.259 INFO:teuthology.orchestra.run.vm06.stdout: 8 Mar 22:54:30 ntpd[15993]: proto: precision = 0.029 usec (-25)
2026-03-08T22:54:30.259 INFO:teuthology.orchestra.run.vm06.stdout: 8 Mar 22:54:30 ntpd[15993]: basedate set to 2022-02-04
2026-03-08T22:54:30.259 INFO:teuthology.orchestra.run.vm06.stdout: 8 Mar 22:54:30 ntpd[15993]: gps base set to 2022-02-06 (week 2196)
2026-03-08T22:54:30.259 INFO:teuthology.orchestra.run.vm06.stdout: 8 Mar 22:54:30 ntpd[15993]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): good hash signature
2026-03-08T22:54:30.259 INFO:teuthology.orchestra.run.vm06.stdout: 8 Mar 22:54:30 ntpd[15993]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): loaded, expire=2025-12-28T00:00:00Z last=2017-01-01T00:00:00Z ofs=37
2026-03-08T22:54:30.259 INFO:teuthology.orchestra.run.vm06.stderr: 8 Mar 22:54:30 ntpd[15993]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): expired 71 days ago
2026-03-08T22:54:30.260 INFO:teuthology.orchestra.run.vm06.stdout: 8 Mar 22:54:30 ntpd[15993]: Listen and drop on 0 v6wildcard [::]:123
2026-03-08T22:54:30.260 INFO:teuthology.orchestra.run.vm06.stdout: 8 Mar 22:54:30 ntpd[15993]: Listen and drop on 1 v4wildcard 0.0.0.0:123
2026-03-08T22:54:30.261 INFO:teuthology.orchestra.run.vm06.stdout: 8 Mar 22:54:30 ntpd[15993]: Listen normally on 2 lo 127.0.0.1:123
2026-03-08T22:54:30.261 INFO:teuthology.orchestra.run.vm06.stdout: 8 Mar 22:54:30 ntpd[15993]: Listen normally on 3 ens3 192.168.123.106:123
2026-03-08T22:54:30.261 INFO:teuthology.orchestra.run.vm06.stdout: 8 Mar 22:54:30 ntpd[15993]: Listen normally on 4 lo [::1]:123
2026-03-08T22:54:30.261 INFO:teuthology.orchestra.run.vm06.stdout: 8 Mar 22:54:30 ntpd[15993]: Listen normally on 5 ens3 [fe80::5055:ff:fe00:6%2]:123
2026-03-08T22:54:30.261 INFO:teuthology.orchestra.run.vm06.stdout: 8 Mar 22:54:30 ntpd[15993]: Listening on routing socket on fd #22 for interface updates
2026-03-08T22:54:30.293 INFO:teuthology.orchestra.run.vm11.stdout: 8 Mar 22:54:30 ntpd[15954]: ntpd 4.2.8p15@1.3728-o Wed Feb 16 17:13:02 UTC 2022 (1): Starting
2026-03-08T22:54:30.293 INFO:teuthology.orchestra.run.vm11.stdout: 8 Mar 22:54:30 ntpd[15954]: Command line: ntpd -gq
2026-03-08T22:54:30.293 INFO:teuthology.orchestra.run.vm11.stdout: 8 Mar 22:54:30 ntpd[15954]: ----------------------------------------------------
2026-03-08T22:54:30.293 INFO:teuthology.orchestra.run.vm11.stdout: 8 Mar 22:54:30 ntpd[15954]: ntp-4 is maintained by Network Time Foundation,
2026-03-08T22:54:30.293 INFO:teuthology.orchestra.run.vm11.stdout: 8 Mar 22:54:30 ntpd[15954]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
2026-03-08T22:54:30.293 INFO:teuthology.orchestra.run.vm11.stdout: 8 Mar 22:54:30 ntpd[15954]: corporation.  Support and training for ntp-4 are
2026-03-08T22:54:30.293 INFO:teuthology.orchestra.run.vm11.stdout: 8 Mar 22:54:30 ntpd[15954]: available at https://www.nwtime.org/support
2026-03-08T22:54:30.293 INFO:teuthology.orchestra.run.vm11.stdout: 8 Mar 22:54:30 ntpd[15954]: ----------------------------------------------------
2026-03-08T22:54:30.293 INFO:teuthology.orchestra.run.vm11.stdout: 8 Mar 22:54:30 ntpd[15954]: proto: precision = 0.030 usec (-25)
2026-03-08T22:54:30.293 INFO:teuthology.orchestra.run.vm11.stdout: 8 Mar 22:54:30 ntpd[15954]: basedate set to 2022-02-04
2026-03-08T22:54:30.293 INFO:teuthology.orchestra.run.vm11.stdout: 8 Mar 22:54:30 ntpd[15954]: gps base set to 2022-02-06 (week 2196)
2026-03-08T22:54:30.293 INFO:teuthology.orchestra.run.vm11.stdout: 8 Mar 22:54:30 ntpd[15954]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): good hash signature
2026-03-08T22:54:30.294 INFO:teuthology.orchestra.run.vm11.stdout: 8 Mar 22:54:30 ntpd[15954]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): loaded, expire=2025-12-28T00:00:00Z last=2017-01-01T00:00:00Z ofs=37
2026-03-08T22:54:30.294 INFO:teuthology.orchestra.run.vm11.stdout: 8 Mar 22:54:30 ntpd[15954]: Listen and drop on 0 v6wildcard [::]:123
2026-03-08T22:54:30.294 INFO:teuthology.orchestra.run.vm11.stdout: 8 Mar 22:54:30 ntpd[15954]: Listen and drop on 1 v4wildcard 0.0.0.0:123
2026-03-08T22:54:30.294 INFO:teuthology.orchestra.run.vm11.stdout: 8 Mar 22:54:30 ntpd[15954]: Listen normally on 2 lo 127.0.0.1:123
2026-03-08T22:54:30.294 INFO:teuthology.orchestra.run.vm11.stdout: 8 Mar 22:54:30 ntpd[15954]: Listen normally on 3 ens3 192.168.123.111:123
2026-03-08T22:54:30.294 INFO:teuthology.orchestra.run.vm11.stdout: 8 Mar 22:54:30 ntpd[15954]: Listen normally on 4 lo [::1]:123
2026-03-08T22:54:30.294 INFO:teuthology.orchestra.run.vm11.stdout: 8 Mar 22:54:30 ntpd[15954]: Listen normally on 5 ens3 [fe80::5055:ff:fe00:b%2]:123
2026-03-08T22:54:30.294 INFO:teuthology.orchestra.run.vm11.stdout: 8 Mar 22:54:30 ntpd[15954]: Listening on routing socket on fd #22 for interface updates
2026-03-08T22:54:30.294 INFO:teuthology.orchestra.run.vm11.stderr: 8 Mar 22:54:30 ntpd[15954]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): expired 71 days ago
2026-03-08T22:54:31.259 INFO:teuthology.orchestra.run.vm06.stdout: 8 Mar 22:54:31 ntpd[15993]: Soliciting pool server 172.236.195.26
2026-03-08T22:54:31.292 INFO:teuthology.orchestra.run.vm11.stdout: 8 Mar 22:54:31 ntpd[15954]: Soliciting pool server 172.236.195.26
2026-03-08T22:54:32.258 INFO:teuthology.orchestra.run.vm06.stdout: 8 Mar 22:54:32 ntpd[15993]: Soliciting pool server 90.187.112.137
2026-03-08T22:54:32.258 INFO:teuthology.orchestra.run.vm06.stdout: 8 Mar 22:54:32 ntpd[15993]: Soliciting pool server 193.99.165.216
2026-03-08T22:54:32.291 INFO:teuthology.orchestra.run.vm11.stdout: 8 Mar 22:54:32 ntpd[15954]: Soliciting pool server 90.187.112.137
2026-03-08T22:54:32.291 INFO:teuthology.orchestra.run.vm11.stdout: 8 Mar 22:54:32 ntpd[15954]: Soliciting pool server 193.99.165.216
2026-03-08T22:54:33.258 INFO:teuthology.orchestra.run.vm06.stdout: 8 Mar 22:54:33 ntpd[15993]: Soliciting pool server 212.132.97.26
2026-03-08T22:54:33.258 INFO:teuthology.orchestra.run.vm06.stdout: 8 Mar 22:54:33 ntpd[15993]: Soliciting pool server 141.98.136.83
2026-03-08T22:54:33.258 INFO:teuthology.orchestra.run.vm06.stdout: 8 Mar 22:54:33 ntpd[15993]: Soliciting pool server 129.250.35.250
2026-03-08T22:54:33.290 INFO:teuthology.orchestra.run.vm11.stdout: 8 Mar 22:54:33 ntpd[15954]: Soliciting pool server 141.98.136.83
2026-03-08T22:54:33.291 INFO:teuthology.orchestra.run.vm11.stdout: 8 Mar 22:54:33 ntpd[15954]: Soliciting pool server 129.250.35.250
2026-03-08T22:54:34.258 INFO:teuthology.orchestra.run.vm06.stdout: 8 Mar 22:54:34 ntpd[15993]: Soliciting pool server 79.133.44.139
2026-03-08T22:54:34.258 INFO:teuthology.orchestra.run.vm06.stdout: 8 Mar 22:54:34 ntpd[15993]: Soliciting pool server 139.162.152.20
2026-03-08T22:54:34.258 INFO:teuthology.orchestra.run.vm06.stdout: 8 Mar 22:54:34 ntpd[15993]: Soliciting pool server 62.108.36.235
2026-03-08T22:54:34.258 INFO:teuthology.orchestra.run.vm06.stdout: 8 Mar 22:54:34 ntpd[15993]: Soliciting pool server 93.241.86.156
2026-03-08T22:54:34.290 INFO:teuthology.orchestra.run.vm11.stdout: 8 Mar 22:54:34 ntpd[15954]: Soliciting pool server 79.133.44.139
2026-03-08T22:54:34.290 INFO:teuthology.orchestra.run.vm11.stdout: 8 Mar 22:54:34 ntpd[15954]: Soliciting pool server 62.108.36.235
2026-03-08T22:54:34.290 INFO:teuthology.orchestra.run.vm11.stdout: 8 Mar 22:54:34 ntpd[15954]: Soliciting pool server 93.241.86.156
2026-03-08T22:54:35.258 INFO:teuthology.orchestra.run.vm06.stdout: 8 Mar 22:54:35 ntpd[15993]: Soliciting pool server 85.121.52.237
2026-03-08T22:54:35.258 INFO:teuthology.orchestra.run.vm06.stdout: 8 Mar 22:54:35 ntpd[15993]: Soliciting pool server 162.159.200.1
2026-03-08T22:54:35.258 INFO:teuthology.orchestra.run.vm06.stdout: 8 Mar 22:54:35 ntpd[15993]: Soliciting pool server 217.144.138.234
2026-03-08T22:54:35.258 INFO:teuthology.orchestra.run.vm06.stdout: 8 Mar 22:54:35 ntpd[15993]: Soliciting pool server 91.189.91.157
2026-03-08T22:54:35.290 INFO:teuthology.orchestra.run.vm11.stdout: 8 Mar 22:54:35 ntpd[15954]: Soliciting pool server 85.121.52.237
2026-03-08T22:54:35.290 INFO:teuthology.orchestra.run.vm11.stdout: 8 Mar 22:54:35 ntpd[15954]: Soliciting pool server 162.159.200.1
2026-03-08T22:54:35.290 INFO:teuthology.orchestra.run.vm11.stdout: 8 Mar 22:54:35 ntpd[15954]: Soliciting pool server 91.189.91.157
2026-03-08T22:54:36.257 INFO:teuthology.orchestra.run.vm06.stdout: 8 Mar 22:54:36 ntpd[15993]: Soliciting pool server 185.125.190.56
2026-03-08T22:54:36.257 INFO:teuthology.orchestra.run.vm06.stdout: 8 Mar 22:54:36 ntpd[15993]: Soliciting pool server 51.75.67.47
2026-03-08T22:54:36.258 INFO:teuthology.orchestra.run.vm06.stdout: 8 Mar 22:54:36 ntpd[15993]: Soliciting pool server 134.60.111.110
2026-03-08T22:54:36.289 INFO:teuthology.orchestra.run.vm11.stdout: 8 Mar 22:54:36 ntpd[15954]: Soliciting pool server 185.125.190.56
2026-03-08T22:54:36.289 INFO:teuthology.orchestra.run.vm11.stdout: 8 Mar 22:54:36 ntpd[15954]: Soliciting pool server 51.75.67.47
2026-03-08T22:54:36.289 INFO:teuthology.orchestra.run.vm11.stdout: 8 Mar 22:54:36 ntpd[15954]: Soliciting pool server 134.60.111.110
2026-03-08T22:54:37.289 INFO:teuthology.orchestra.run.vm11.stdout: 8 Mar 22:54:37 ntpd[15954]: Soliciting pool server 185.125.190.58
2026-03-08T22:54:37.289 INFO:teuthology.orchestra.run.vm11.stdout: 8 Mar 22:54:37 ntpd[15954]: Soliciting pool server 134.60.1.30
2026-03-08T22:54:37.289 INFO:teuthology.orchestra.run.vm11.stdout: 8 Mar 22:54:37 ntpd[15954]: Soliciting pool server 2001:1640:3::3
2026-03-08T22:54:38.288 INFO:teuthology.orchestra.run.vm11.stdout: 8 Mar 22:54:38 ntpd[15954]: Soliciting pool server 185.125.190.57
2026-03-08T22:54:39.311 INFO:teuthology.orchestra.run.vm11.stdout: 8 Mar 22:54:39 ntpd[15954]: ntpd: time slew +0.022079 s
2026-03-08T22:54:39.311 INFO:teuthology.orchestra.run.vm11.stdout:ntpd: time slew +0.022079s
2026-03-08T22:54:39.330 INFO:teuthology.orchestra.run.vm11.stdout:     remote           refid      st t when poll reach   delay   offset  jitter
2026-03-08T22:54:39.330 INFO:teuthology.orchestra.run.vm11.stdout:==============================================================================
2026-03-08T22:54:39.330 INFO:teuthology.orchestra.run.vm11.stdout: 0.ubuntu.pool.n .POOL.          16 p    -   64    0    0.000   +0.000   0.000
2026-03-08T22:54:39.330 INFO:teuthology.orchestra.run.vm11.stdout: 1.ubuntu.pool.n .POOL.          16 p    -   64    0    0.000   +0.000   0.000
2026-03-08T22:54:39.330 INFO:teuthology.orchestra.run.vm11.stdout: 2.ubuntu.pool.n .POOL.          16 p    -   64    0    0.000   +0.000   0.000
2026-03-08T22:54:39.330 INFO:teuthology.orchestra.run.vm11.stdout: 3.ubuntu.pool.n .POOL.          16 p    -   64    0    0.000   +0.000   0.000
2026-03-08T22:54:39.330 INFO:teuthology.orchestra.run.vm11.stdout: ntp.ubuntu.com  .POOL.          16 p    -   64    0    0.000   +0.000   0.000
2026-03-08T22:54:40.281 INFO:teuthology.orchestra.run.vm06.stdout: 8 Mar 22:54:40 ntpd[15993]: ntpd: time slew -0.001812 s
2026-03-08T22:54:40.281 INFO:teuthology.orchestra.run.vm06.stdout:ntpd: time slew -0.001812s
2026-03-08T22:54:40.305 INFO:teuthology.orchestra.run.vm06.stdout:     remote           refid      st t when poll reach   delay   offset  jitter
2026-03-08T22:54:40.305 INFO:teuthology.orchestra.run.vm06.stdout:==============================================================================
2026-03-08T22:54:40.305 INFO:teuthology.orchestra.run.vm06.stdout: 0.ubuntu.pool.n .POOL.          16 p    -   64    0    0.000   +0.000   0.000
2026-03-08T22:54:40.305 INFO:teuthology.orchestra.run.vm06.stdout: 1.ubuntu.pool.n .POOL.          16 p    -   64    0    0.000   +0.000   0.000
2026-03-08T22:54:40.305 INFO:teuthology.orchestra.run.vm06.stdout: 2.ubuntu.pool.n .POOL.          16 p    -   64    0    0.000   +0.000   0.000
2026-03-08T22:54:40.305 INFO:teuthology.orchestra.run.vm06.stdout: 3.ubuntu.pool.n .POOL.          16 p    -   64    0    0.000   +0.000   0.000
2026-03-08T22:54:40.305 INFO:teuthology.orchestra.run.vm06.stdout: ntp.ubuntu.com  .POOL.          16 p    -   64    0    0.000   +0.000   0.000
2026-03-08T22:54:40.305 INFO:teuthology.run_tasks:Running task install...
2026-03-08T22:54:40.312 DEBUG:teuthology.task.install:project ceph
2026-03-08T22:54:40.312 DEBUG:teuthology.task.install:INSTALL overrides: {'ceph': {'flavor': 'default', 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df'}, 'extra_system_packages': {'deb': ['python3-xmltodict', 'python3-jmespath'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-xmltodict', 'python3-jmespath']}}
2026-03-08T22:54:40.312 DEBUG:teuthology.task.install:config {'flavor': 'default', 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df', 'extra_system_packages': {'deb': ['python3-xmltodict', 'python3-jmespath'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-xmltodict', 'python3-jmespath']}}
2026-03-08T22:54:40.312 INFO:teuthology.task.install:Using flavor: default
2026-03-08T22:54:40.315 DEBUG:teuthology.task.install:Package list is: {'deb': ['ceph', 'cephadm', 'ceph-mds', 'ceph-mgr', 'ceph-common', 'ceph-fuse', 'ceph-test', 'ceph-volume', 'radosgw', 'python3-rados', 'python3-rgw', 'python3-cephfs', 'python3-rbd', 'libcephfs2', 'libcephfs-dev', 'librados2', 'librbd1', 'rbd-fuse'], 'rpm': ['ceph-radosgw', 'ceph-test', 'ceph', 'ceph-base', 'cephadm', 'ceph-immutable-object-cache', 'ceph-mgr', 'ceph-mgr-dashboard', 'ceph-mgr-diskprediction-local', 'ceph-mgr-rook', 'ceph-mgr-cephadm', 'ceph-fuse', 'ceph-volume', 'librados-devel', 'libcephfs2', 'libcephfs-devel', 'librados2', 'librbd1', 'python3-rados', 'python3-rgw', 'python3-cephfs', 'python3-rbd', 'rbd-fuse', 'rbd-mirror', 'rbd-nbd']}
2026-03-08T22:54:40.315 INFO:teuthology.task.install:extra packages: []
2026-03-08T22:54:40.315 DEBUG:teuthology.orchestra.run.vm06:> sudo apt-key list | grep Ceph
2026-03-08T22:54:40.315 DEBUG:teuthology.orchestra.run.vm11:> sudo apt-key list | grep Ceph
2026-03-08T22:54:40.356 INFO:teuthology.orchestra.run.vm11.stderr:Warning: apt-key is deprecated. Manage keyring files in trusted.gpg.d instead (see apt-key(8)).
2026-03-08T22:54:40.377 INFO:teuthology.orchestra.run.vm11.stdout:uid [ unknown] Ceph automated package build (Ceph automated package build)
2026-03-08T22:54:40.377 INFO:teuthology.orchestra.run.vm11.stdout:uid [ unknown] Ceph.com (release key)
2026-03-08T22:54:40.377 INFO:teuthology.task.install.deb:Installing packages: ceph, cephadm, ceph-mds, ceph-mgr, ceph-common, ceph-fuse, ceph-test, ceph-volume, radosgw, python3-rados, python3-rgw, python3-cephfs, python3-rbd, libcephfs2, libcephfs-dev, librados2, librbd1, rbd-fuse on remote deb x86_64
2026-03-08T22:54:40.377 INFO:teuthology.task.install.deb:Installing system (non-project) packages: python3-xmltodict, python3-jmespath on remote deb x86_64
2026-03-08T22:54:40.377 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-08T22:54:40.440 INFO:teuthology.orchestra.run.vm06.stderr:Warning: apt-key is deprecated. Manage keyring files in trusted.gpg.d instead (see apt-key(8)).
2026-03-08T22:54:40.441 INFO:teuthology.orchestra.run.vm06.stdout:uid [ unknown] Ceph automated package build (Ceph automated package build)
2026-03-08T22:54:40.441 INFO:teuthology.orchestra.run.vm06.stdout:uid [ unknown] Ceph.com (release key)
2026-03-08T22:54:40.441 INFO:teuthology.task.install.deb:Installing packages: ceph, cephadm, ceph-mds, ceph-mgr, ceph-common, ceph-fuse, ceph-test, ceph-volume, radosgw, python3-rados, python3-rgw, python3-cephfs, python3-rbd, libcephfs2, libcephfs-dev, librados2, librbd1, rbd-fuse on remote deb x86_64
2026-03-08T22:54:40.441 INFO:teuthology.task.install.deb:Installing system (non-project) packages: python3-xmltodict, python3-jmespath on remote deb x86_64
2026-03-08T22:54:40.441 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-08T22:54:41.020 INFO:teuthology.task.install.deb:Pulling from https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default/
2026-03-08T22:54:41.020 INFO:teuthology.task.install.deb:Package version is 19.2.3-678-ge911bdeb-1jammy
2026-03-08T22:54:41.036 INFO:teuthology.task.install.deb:Pulling from https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default/
2026-03-08T22:54:41.036 INFO:teuthology.task.install.deb:Package version is 19.2.3-678-ge911bdeb-1jammy
2026-03-08T22:54:41.506 DEBUG:teuthology.orchestra.run.vm06:> set -ex
2026-03-08T22:54:41.506 DEBUG:teuthology.orchestra.run.vm06:> sudo dd of=/etc/apt/sources.list.d/ceph.list
2026-03-08T22:54:41.514 DEBUG:teuthology.orchestra.run.vm06:> sudo apt-get update
2026-03-08T22:54:41.553 DEBUG:teuthology.orchestra.run.vm11:> set -ex
2026-03-08T22:54:41.553 DEBUG:teuthology.orchestra.run.vm11:> sudo dd of=/etc/apt/sources.list.d/ceph.list
2026-03-08T22:54:41.561 DEBUG:teuthology.orchestra.run.vm11:> sudo apt-get update
2026-03-08T22:54:41.687 INFO:teuthology.orchestra.run.vm06.stdout:Hit:1 https://archive.ubuntu.com/ubuntu jammy InRelease
2026-03-08T22:54:41.690 INFO:teuthology.orchestra.run.vm06.stdout:Hit:2 https://archive.ubuntu.com/ubuntu jammy-updates InRelease
2026-03-08T22:54:41.699 INFO:teuthology.orchestra.run.vm06.stdout:Hit:3 https://archive.ubuntu.com/ubuntu jammy-backports InRelease
2026-03-08T22:54:41.803 INFO:teuthology.orchestra.run.vm06.stdout:Hit:4 https://security.ubuntu.com/ubuntu jammy-security InRelease
2026-03-08T22:54:41.878 INFO:teuthology.orchestra.run.vm11.stdout:Hit:1 https://security.ubuntu.com/ubuntu jammy-security InRelease
2026-03-08T22:54:42.099 INFO:teuthology.orchestra.run.vm11.stdout:Hit:2 https://archive.ubuntu.com/ubuntu jammy InRelease
2026-03-08T22:54:42.175 INFO:teuthology.orchestra.run.vm06.stdout:Ign:5 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy InRelease
2026-03-08T22:54:42.182 INFO:teuthology.orchestra.run.vm11.stdout:Ign:3 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy InRelease
2026-03-08T22:54:42.194 INFO:teuthology.orchestra.run.vm11.stdout:Hit:4 https://archive.ubuntu.com/ubuntu jammy-updates InRelease
2026-03-08T22:54:42.290 INFO:teuthology.orchestra.run.vm11.stdout:Hit:5 https://archive.ubuntu.com/ubuntu jammy-backports InRelease
2026-03-08T22:54:42.293 INFO:teuthology.orchestra.run.vm06.stdout:Get:6 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy Release [7662 B]
2026-03-08T22:54:42.302 INFO:teuthology.orchestra.run.vm11.stdout:Get:6 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy Release [7662 B]
2026-03-08T22:54:42.412 INFO:teuthology.orchestra.run.vm06.stdout:Ign:7 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy Release.gpg
2026-03-08T22:54:42.423 INFO:teuthology.orchestra.run.vm11.stdout:Ign:7 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy Release.gpg
2026-03-08T22:54:42.530 INFO:teuthology.orchestra.run.vm06.stdout:Get:8 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 Packages [18.1 kB]
2026-03-08T22:54:42.543 INFO:teuthology.orchestra.run.vm11.stdout:Get:8 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 Packages [18.1 kB]
2026-03-08T22:54:42.614 INFO:teuthology.orchestra.run.vm06.stdout:Fetched 25.8 kB in 1s (27.4 kB/s)
2026-03-08T22:54:42.625 INFO:teuthology.orchestra.run.vm11.stdout:Fetched 25.8 kB in 1s (28.8 kB/s)
2026-03-08T22:54:43.351 INFO:teuthology.orchestra.run.vm06.stdout:Reading package lists...
2026-03-08T22:54:43.365 DEBUG:teuthology.orchestra.run.vm06:> sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=19.2.3-678-ge911bdeb-1jammy cephadm=19.2.3-678-ge911bdeb-1jammy ceph-mds=19.2.3-678-ge911bdeb-1jammy ceph-mgr=19.2.3-678-ge911bdeb-1jammy ceph-common=19.2.3-678-ge911bdeb-1jammy ceph-fuse=19.2.3-678-ge911bdeb-1jammy ceph-test=19.2.3-678-ge911bdeb-1jammy ceph-volume=19.2.3-678-ge911bdeb-1jammy radosgw=19.2.3-678-ge911bdeb-1jammy python3-rados=19.2.3-678-ge911bdeb-1jammy python3-rgw=19.2.3-678-ge911bdeb-1jammy python3-cephfs=19.2.3-678-ge911bdeb-1jammy python3-rbd=19.2.3-678-ge911bdeb-1jammy libcephfs2=19.2.3-678-ge911bdeb-1jammy libcephfs-dev=19.2.3-678-ge911bdeb-1jammy librados2=19.2.3-678-ge911bdeb-1jammy librbd1=19.2.3-678-ge911bdeb-1jammy rbd-fuse=19.2.3-678-ge911bdeb-1jammy
2026-03-08T22:54:43.380 INFO:teuthology.orchestra.run.vm11.stdout:Reading package lists...
2026-03-08T22:54:43.396 DEBUG:teuthology.orchestra.run.vm11:> sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=19.2.3-678-ge911bdeb-1jammy cephadm=19.2.3-678-ge911bdeb-1jammy ceph-mds=19.2.3-678-ge911bdeb-1jammy ceph-mgr=19.2.3-678-ge911bdeb-1jammy ceph-common=19.2.3-678-ge911bdeb-1jammy ceph-fuse=19.2.3-678-ge911bdeb-1jammy ceph-test=19.2.3-678-ge911bdeb-1jammy ceph-volume=19.2.3-678-ge911bdeb-1jammy radosgw=19.2.3-678-ge911bdeb-1jammy python3-rados=19.2.3-678-ge911bdeb-1jammy python3-rgw=19.2.3-678-ge911bdeb-1jammy python3-cephfs=19.2.3-678-ge911bdeb-1jammy python3-rbd=19.2.3-678-ge911bdeb-1jammy libcephfs2=19.2.3-678-ge911bdeb-1jammy libcephfs-dev=19.2.3-678-ge911bdeb-1jammy librados2=19.2.3-678-ge911bdeb-1jammy librbd1=19.2.3-678-ge911bdeb-1jammy rbd-fuse=19.2.3-678-ge911bdeb-1jammy
2026-03-08T22:54:43.399 INFO:teuthology.orchestra.run.vm06.stdout:Reading package lists...
2026-03-08T22:54:43.432 INFO:teuthology.orchestra.run.vm11.stdout:Reading package lists...
2026-03-08T22:54:43.637 INFO:teuthology.orchestra.run.vm06.stdout:Building dependency tree...
2026-03-08T22:54:43.638 INFO:teuthology.orchestra.run.vm06.stdout:Reading state information...
2026-03-08T22:54:43.643 INFO:teuthology.orchestra.run.vm11.stdout:Building dependency tree...
2026-03-08T22:54:43.643 INFO:teuthology.orchestra.run.vm11.stdout:Reading state information...
2026-03-08T22:54:43.904 INFO:teuthology.orchestra.run.vm11.stdout:The following packages were automatically installed and are no longer required:
2026-03-08T22:54:43.905 INFO:teuthology.orchestra.run.vm11.stdout: kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-08T22:54:43.906 INFO:teuthology.orchestra.run.vm11.stdout: libsgutils2-2 sg3-utils sg3-utils-udev
2026-03-08T22:54:43.906 INFO:teuthology.orchestra.run.vm11.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-08T22:54:43.907 INFO:teuthology.orchestra.run.vm11.stdout:The following additional packages will be installed:
2026-03-08T22:54:43.907 INFO:teuthology.orchestra.run.vm11.stdout: ceph-base ceph-mgr-cephadm ceph-mgr-dashboard ceph-mgr-diskprediction-local
2026-03-08T22:54:43.907 INFO:teuthology.orchestra.run.vm11.stdout: ceph-mgr-k8sevents ceph-mgr-modules-core ceph-mon ceph-osd jq
2026-03-08T22:54:43.908 INFO:teuthology.orchestra.run.vm11.stdout: libdouble-conversion3 libfuse2 libjq1 liblttng-ust1 liblua5.3-dev libnbd0
2026-03-08T22:54:43.908 INFO:teuthology.orchestra.run.vm11.stdout: liboath0 libonig5 libpcre2-16-0 libqt5core5a libqt5dbus5 libqt5network5
2026-03-08T22:54:43.908 INFO:teuthology.orchestra.run.vm11.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsqlite3-mod-ceph
2026-03-08T22:54:43.909 INFO:teuthology.orchestra.run.vm11.stdout: libthrift-0.16.0 lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli
2026-03-08T22:54:43.909 INFO:teuthology.orchestra.run.vm11.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-08T22:54:43.909 INFO:teuthology.orchestra.run.vm11.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot
2026-03-08T22:54:43.909 INFO:teuthology.orchestra.run.vm11.stdout: python3-cherrypy3 python3-google-auth python3-iniconfig
2026-03-08T22:54:43.909 INFO:teuthology.orchestra.run.vm11.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools
2026-03-08T22:54:43.909 INFO:teuthology.orchestra.run.vm11.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils
2026-03-08T22:54:43.910 INFO:teuthology.orchestra.run.vm11.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy
2026-03-08T22:54:43.910 INFO:teuthology.orchestra.run.vm11.stdout: python3-pastescript python3-pecan python3-pluggy python3-portend
2026-03-08T22:54:43.910 INFO:teuthology.orchestra.run.vm11.stdout: python3-prettytable python3-psutil python3-py python3-pygments
2026-03-08T22:54:43.910 INFO:teuthology.orchestra.run.vm11.stdout: python3-pyinotify python3-pytest python3-repoze.lru
2026-03-08T22:54:43.910 INFO:teuthology.orchestra.run.vm11.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric
2026-03-08T22:54:43.910 INFO:teuthology.orchestra.run.vm11.stdout: python3-simplejson python3-singledispatch python3-sklearn
2026-03-08T22:54:43.910 INFO:teuthology.orchestra.run.vm11.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl
2026-03-08T22:54:43.910 INFO:teuthology.orchestra.run.vm11.stdout: python3-toml python3-waitress python3-wcwidth python3-webob
2026-03-08T22:54:43.910 INFO:teuthology.orchestra.run.vm11.stdout: python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile
2026-03-08T22:54:43.910 INFO:teuthology.orchestra.run.vm11.stdout: qttranslations5-l10n smartmontools socat unzip xmlstarlet zip
2026-03-08T22:54:43.912 INFO:teuthology.orchestra.run.vm11.stdout:Suggested packages:
2026-03-08T22:54:43.912 INFO:teuthology.orchestra.run.vm11.stdout: python3-influxdb readline-doc python3-beaker python-mako-doc
2026-03-08T22:54:43.912 INFO:teuthology.orchestra.run.vm11.stdout: python-natsort-doc httpd-wsgi libapache2-mod-python libapache2-mod-scgi
2026-03-08T22:54:43.912 INFO:teuthology.orchestra.run.vm11.stdout: libjs-mochikit python-pecan-doc python-psutil-doc subversion
2026-03-08T22:54:43.912 INFO:teuthology.orchestra.run.vm11.stdout: python-pygments-doc ttf-bitstream-vera python-pyinotify-doc python3-dap
2026-03-08T22:54:43.912 INFO:teuthology.orchestra.run.vm11.stdout: python-sklearn-doc ipython3 python-waitress-doc python-webob-doc
2026-03-08T22:54:43.912 INFO:teuthology.orchestra.run.vm11.stdout: python-webtest-doc python-werkzeug-doc python3-watchdog gsmartcontrol
2026-03-08T22:54:43.912 INFO:teuthology.orchestra.run.vm11.stdout: smart-notifier mailx | mailutils
2026-03-08T22:54:43.912 INFO:teuthology.orchestra.run.vm11.stdout:Recommended packages:
2026-03-08T22:54:43.913 INFO:teuthology.orchestra.run.vm11.stdout: btrfs-tools
2026-03-08T22:54:43.923 INFO:teuthology.orchestra.run.vm06.stdout:The following packages were automatically installed and are no longer required:
2026-03-08T22:54:43.924 INFO:teuthology.orchestra.run.vm06.stdout: kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-08T22:54:43.924 INFO:teuthology.orchestra.run.vm06.stdout: libsgutils2-2 sg3-utils sg3-utils-udev
2026-03-08T22:54:43.924 INFO:teuthology.orchestra.run.vm06.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-08T22:54:43.925 INFO:teuthology.orchestra.run.vm06.stdout:The following additional packages will be installed:
2026-03-08T22:54:43.925 INFO:teuthology.orchestra.run.vm06.stdout: ceph-base ceph-mgr-cephadm ceph-mgr-dashboard ceph-mgr-diskprediction-local
2026-03-08T22:54:43.925 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mgr-k8sevents ceph-mgr-modules-core ceph-mon ceph-osd jq
2026-03-08T22:54:43.925 INFO:teuthology.orchestra.run.vm06.stdout: libdouble-conversion3 libfuse2 libjq1 liblttng-ust1 liblua5.3-dev libnbd0
2026-03-08T22:54:43.925 INFO:teuthology.orchestra.run.vm06.stdout: liboath0 libonig5 libpcre2-16-0 libqt5core5a libqt5dbus5 libqt5network5
2026-03-08T22:54:43.925 INFO:teuthology.orchestra.run.vm06.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsqlite3-mod-ceph
2026-03-08T22:54:43.925 INFO:teuthology.orchestra.run.vm06.stdout: libthrift-0.16.0 lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli
2026-03-08T22:54:43.925 INFO:teuthology.orchestra.run.vm06.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-08T22:54:43.925 INFO:teuthology.orchestra.run.vm06.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot
2026-03-08T22:54:43.925 INFO:teuthology.orchestra.run.vm06.stdout: python3-cherrypy3 python3-google-auth python3-iniconfig
2026-03-08T22:54:43.925 INFO:teuthology.orchestra.run.vm06.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools
2026-03-08T22:54:43.925 INFO:teuthology.orchestra.run.vm06.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils
2026-03-08T22:54:43.926 INFO:teuthology.orchestra.run.vm06.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy
2026-03-08T22:54:43.926 INFO:teuthology.orchestra.run.vm06.stdout: python3-pastescript python3-pecan python3-pluggy python3-portend
2026-03-08T22:54:43.926 INFO:teuthology.orchestra.run.vm06.stdout: python3-prettytable python3-psutil python3-py python3-pygments
2026-03-08T22:54:43.926 INFO:teuthology.orchestra.run.vm06.stdout: python3-pyinotify python3-pytest python3-repoze.lru
2026-03-08T22:54:43.926 INFO:teuthology.orchestra.run.vm06.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric
2026-03-08T22:54:43.926 INFO:teuthology.orchestra.run.vm06.stdout: python3-simplejson python3-singledispatch python3-sklearn
2026-03-08T22:54:43.926 INFO:teuthology.orchestra.run.vm06.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl
2026-03-08T22:54:43.926 INFO:teuthology.orchestra.run.vm06.stdout: python3-toml python3-waitress python3-wcwidth python3-webob
2026-03-08T22:54:43.926 INFO:teuthology.orchestra.run.vm06.stdout: python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile
2026-03-08T22:54:43.926 INFO:teuthology.orchestra.run.vm06.stdout: qttranslations5-l10n smartmontools socat unzip xmlstarlet zip
2026-03-08T22:54:43.926 INFO:teuthology.orchestra.run.vm06.stdout:Suggested packages:
2026-03-08T22:54:43.926 INFO:teuthology.orchestra.run.vm06.stdout: python3-influxdb readline-doc python3-beaker python-mako-doc
2026-03-08T22:54:43.926 INFO:teuthology.orchestra.run.vm06.stdout: python-natsort-doc httpd-wsgi libapache2-mod-python libapache2-mod-scgi
2026-03-08T22:54:43.926 INFO:teuthology.orchestra.run.vm06.stdout: libjs-mochikit python-pecan-doc python-psutil-doc subversion
2026-03-08T22:54:43.926 INFO:teuthology.orchestra.run.vm06.stdout: python-pygments-doc ttf-bitstream-vera python-pyinotify-doc python3-dap
2026-03-08T22:54:43.926 INFO:teuthology.orchestra.run.vm06.stdout: python-sklearn-doc ipython3 python-waitress-doc python-webob-doc
2026-03-08T22:54:43.926 INFO:teuthology.orchestra.run.vm06.stdout: python-webtest-doc python-werkzeug-doc python3-watchdog gsmartcontrol
2026-03-08T22:54:43.926 INFO:teuthology.orchestra.run.vm06.stdout: smart-notifier mailx | mailutils
2026-03-08T22:54:43.926 INFO:teuthology.orchestra.run.vm06.stdout:Recommended packages:
2026-03-08T22:54:43.926 INFO:teuthology.orchestra.run.vm06.stdout: btrfs-tools
2026-03-08T22:54:43.960 INFO:teuthology.orchestra.run.vm11.stdout:The following NEW packages will be installed:
2026-03-08T22:54:43.961 INFO:teuthology.orchestra.run.vm11.stdout: ceph ceph-base ceph-common ceph-fuse ceph-mds ceph-mgr ceph-mgr-cephadm
2026-03-08T22:54:43.961 INFO:teuthology.orchestra.run.vm11.stdout: ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-k8sevents
2026-03-08T22:54:43.961 INFO:teuthology.orchestra.run.vm11.stdout: ceph-mgr-modules-core ceph-mon ceph-osd ceph-test ceph-volume cephadm jq
2026-03-08T22:54:43.961 INFO:teuthology.orchestra.run.vm11.stdout: libcephfs-dev libcephfs2 libdouble-conversion3 libfuse2 libjq1 liblttng-ust1
2026-03-08T22:54:43.961 INFO:teuthology.orchestra.run.vm11.stdout: liblua5.3-dev libnbd0 liboath0 libonig5 libpcre2-16-0 libqt5core5a
2026-03-08T22:54:43.961 INFO:teuthology.orchestra.run.vm11.stdout: libqt5dbus5 libqt5network5 libradosstriper1 librdkafka1 libreadline-dev
2026-03-08T22:54:43.962 INFO:teuthology.orchestra.run.vm11.stdout: librgw2 libsqlite3-mod-ceph libthrift-0.16.0 lua-any lua-sec lua-socket
2026-03-08T22:54:43.962 INFO:teuthology.orchestra.run.vm11.stdout: lua5.1 luarocks nvme-cli pkg-config python-asyncssh-doc
2026-03-08T22:54:43.962 INFO:teuthology.orchestra.run.vm11.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-08T22:54:43.962 INFO:teuthology.orchestra.run.vm11.stdout: python3-ceph-argparse python3-ceph-common python3-cephfs python3-cheroot
2026-03-08T22:54:43.962 INFO:teuthology.orchestra.run.vm11.stdout: python3-cherrypy3 python3-google-auth python3-iniconfig
2026-03-08T22:54:43.962 INFO:teuthology.orchestra.run.vm11.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools
2026-03-08T22:54:43.962 INFO:teuthology.orchestra.run.vm11.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils
2026-03-08T22:54:43.962 INFO:teuthology.orchestra.run.vm11.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy
2026-03-08T22:54:43.962 INFO:teuthology.orchestra.run.vm11.stdout: python3-pastescript python3-pecan python3-pluggy python3-portend
2026-03-08T22:54:43.962 INFO:teuthology.orchestra.run.vm11.stdout: python3-prettytable python3-psutil python3-py python3-pygments
2026-03-08T22:54:43.962 INFO:teuthology.orchestra.run.vm11.stdout: python3-pyinotify python3-pytest python3-rados python3-rbd
2026-03-08T22:54:43.962 INFO:teuthology.orchestra.run.vm11.stdout: python3-repoze.lru python3-requests-oauthlib python3-rgw python3-routes
2026-03-08T22:54:43.962 INFO:teuthology.orchestra.run.vm11.stdout: python3-rsa python3-simplegeneric python3-simplejson python3-singledispatch
2026-03-08T22:54:43.962 INFO:teuthology.orchestra.run.vm11.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora
2026-03-08T22:54:43.962 INFO:teuthology.orchestra.run.vm11.stdout: python3-threadpoolctl python3-toml python3-waitress python3-wcwidth
2026-03-08T22:54:43.962 INFO:teuthology.orchestra.run.vm11.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug
2026-03-08T22:54:43.962 INFO:teuthology.orchestra.run.vm11.stdout: python3-zc.lockfile qttranslations5-l10n radosgw rbd-fuse smartmontools
2026-03-08T22:54:43.962 INFO:teuthology.orchestra.run.vm11.stdout: socat unzip xmlstarlet zip
2026-03-08T22:54:43.963 INFO:teuthology.orchestra.run.vm11.stdout:The following packages will be upgraded:
2026-03-08T22:54:43.963 INFO:teuthology.orchestra.run.vm11.stdout: librados2 librbd1
2026-03-08T22:54:43.973 INFO:teuthology.orchestra.run.vm06.stdout:The following NEW packages will be installed:
2026-03-08T22:54:43.973 INFO:teuthology.orchestra.run.vm06.stdout: ceph ceph-base ceph-common ceph-fuse ceph-mds ceph-mgr ceph-mgr-cephadm
2026-03-08T22:54:43.973 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-k8sevents
2026-03-08T22:54:43.973 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mgr-modules-core ceph-mon ceph-osd ceph-test ceph-volume cephadm jq
2026-03-08T22:54:43.974 INFO:teuthology.orchestra.run.vm06.stdout: libcephfs-dev libcephfs2 libdouble-conversion3 libfuse2 libjq1 liblttng-ust1
2026-03-08T22:54:43.974 INFO:teuthology.orchestra.run.vm06.stdout: liblua5.3-dev libnbd0 liboath0 libonig5 libpcre2-16-0 libqt5core5a
2026-03-08T22:54:43.974 INFO:teuthology.orchestra.run.vm06.stdout: libqt5dbus5 libqt5network5 libradosstriper1 librdkafka1 libreadline-dev
2026-03-08T22:54:43.974 INFO:teuthology.orchestra.run.vm06.stdout: librgw2 libsqlite3-mod-ceph libthrift-0.16.0 lua-any lua-sec lua-socket
2026-03-08T22:54:43.975 INFO:teuthology.orchestra.run.vm06.stdout: lua5.1 luarocks nvme-cli pkg-config python-asyncssh-doc
2026-03-08T22:54:43.975 INFO:teuthology.orchestra.run.vm06.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-08T22:54:43.975 INFO:teuthology.orchestra.run.vm06.stdout: python3-ceph-argparse python3-ceph-common python3-cephfs python3-cheroot
2026-03-08T22:54:43.975 INFO:teuthology.orchestra.run.vm06.stdout: python3-cherrypy3 python3-google-auth python3-iniconfig
2026-03-08T22:54:43.975 INFO:teuthology.orchestra.run.vm06.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools
2026-03-08T22:54:43.975 INFO:teuthology.orchestra.run.vm06.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils
2026-03-08T22:54:43.975 INFO:teuthology.orchestra.run.vm06.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy
2026-03-08T22:54:43.975 INFO:teuthology.orchestra.run.vm06.stdout: python3-pastescript python3-pecan python3-pluggy python3-portend
2026-03-08T22:54:43.975 INFO:teuthology.orchestra.run.vm06.stdout: python3-prettytable python3-psutil python3-py python3-pygments
2026-03-08T22:54:43.975 INFO:teuthology.orchestra.run.vm06.stdout: python3-pyinotify python3-pytest python3-rados python3-rbd
2026-03-08T22:54:43.975 INFO:teuthology.orchestra.run.vm06.stdout: python3-repoze.lru python3-requests-oauthlib python3-rgw python3-routes
2026-03-08T22:54:43.975 INFO:teuthology.orchestra.run.vm06.stdout: python3-rsa python3-simplegeneric python3-simplejson python3-singledispatch
2026-03-08T22:54:43.975 INFO:teuthology.orchestra.run.vm06.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora
2026-03-08T22:54:43.975 INFO:teuthology.orchestra.run.vm06.stdout: python3-threadpoolctl python3-toml python3-waitress python3-wcwidth
2026-03-08T22:54:43.975 INFO:teuthology.orchestra.run.vm06.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug
2026-03-08T22:54:43.975 INFO:teuthology.orchestra.run.vm06.stdout: python3-zc.lockfile qttranslations5-l10n radosgw rbd-fuse smartmontools
2026-03-08T22:54:43.975 INFO:teuthology.orchestra.run.vm06.stdout: socat unzip xmlstarlet zip
2026-03-08T22:54:43.975 INFO:teuthology.orchestra.run.vm06.stdout:The following packages will be upgraded:
2026-03-08T22:54:43.976 INFO:teuthology.orchestra.run.vm06.stdout: librados2 librbd1
2026-03-08T22:54:44.061 INFO:teuthology.orchestra.run.vm11.stdout:2 upgraded, 107 newly installed, 0 to remove and 10 not upgraded.
2026-03-08T22:54:44.061 INFO:teuthology.orchestra.run.vm11.stdout:Need to get 178 MB of archives.
2026-03-08T22:54:44.061 INFO:teuthology.orchestra.run.vm11.stdout:After this operation, 782 MB of additional disk space will be used.
2026-03-08T22:54:44.061 INFO:teuthology.orchestra.run.vm11.stdout:Get:1 https://archive.ubuntu.com/ubuntu jammy/main amd64 liblttng-ust1 amd64 2.13.1-1ubuntu1 [190 kB]
2026-03-08T22:54:44.111 INFO:teuthology.orchestra.run.vm11.stdout:Get:2 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libdouble-conversion3 amd64 3.1.7-4 [39.0 kB]
2026-03-08T22:54:44.111 INFO:teuthology.orchestra.run.vm11.stdout:Get:3 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 libpcre2-16-0 amd64 10.39-3ubuntu0.1 [203 kB]
2026-03-08T22:54:44.124 INFO:teuthology.orchestra.run.vm11.stdout:Get:4 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 libqt5core5a amd64 5.15.3+dfsg-2ubuntu0.2 [2006 kB]
2026-03-08T22:54:44.167 INFO:teuthology.orchestra.run.vm11.stdout:Get:5 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 libqt5dbus5 amd64 5.15.3+dfsg-2ubuntu0.2 [222 kB]
2026-03-08T22:54:44.169 INFO:teuthology.orchestra.run.vm11.stdout:Get:6 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 libqt5network5 amd64 5.15.3+dfsg-2ubuntu0.2 [731 kB]
2026-03-08T22:54:44.176 INFO:teuthology.orchestra.run.vm11.stdout:Get:7 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libthrift-0.16.0 amd64 0.16.0-2 [267 kB]
2026-03-08T22:54:44.178 INFO:teuthology.orchestra.run.vm11.stdout:Get:8 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libnbd0 amd64 1.10.5-1 [71.3 kB]
2026-03-08T22:54:44.178 INFO:teuthology.orchestra.run.vm11.stdout:Get:9 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-wcwidth all 0.2.5+dfsg1-1 [21.9 kB]
2026-03-08T22:54:44.179 INFO:teuthology.orchestra.run.vm11.stdout:Get:10 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-prettytable all 2.5.0-2 [31.3 kB]
2026-03-08T22:54:44.179 INFO:teuthology.orchestra.run.vm11.stdout:Get:11 https://archive.ubuntu.com/ubuntu jammy/universe amd64 librdkafka1 amd64 1.8.0-1build1 [633 kB]
2026-03-08T22:54:44.190 INFO:teuthology.orchestra.run.vm06.stdout:2 upgraded, 107 newly installed, 0 to remove and 10 not upgraded.
2026-03-08T22:54:44.190 INFO:teuthology.orchestra.run.vm06.stdout:Need to get 178 MB of archives.
2026-03-08T22:54:44.190 INFO:teuthology.orchestra.run.vm06.stdout:After this operation, 782 MB of additional disk space will be used.
2026-03-08T22:54:44.190 INFO:teuthology.orchestra.run.vm06.stdout:Get:1 https://archive.ubuntu.com/ubuntu jammy/main amd64 liblttng-ust1 amd64 2.13.1-1ubuntu1 [190 kB]
2026-03-08T22:54:44.208 INFO:teuthology.orchestra.run.vm11.stdout:Get:12 https://archive.ubuntu.com/ubuntu jammy/main amd64 libreadline-dev amd64 8.1.2-1 [166 kB]
2026-03-08T22:54:44.208 INFO:teuthology.orchestra.run.vm11.stdout:Get:13 https://archive.ubuntu.com/ubuntu jammy/main amd64 liblua5.3-dev amd64 5.3.6-1build1 [167 kB]
2026-03-08T22:54:44.209 INFO:teuthology.orchestra.run.vm11.stdout:Get:14 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua5.1 amd64 5.1.5-8.1build4 [94.6 kB]
2026-03-08T22:54:44.210 INFO:teuthology.orchestra.run.vm11.stdout:Get:15 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua-any all 27ubuntu1 [5034 B]
2026-03-08T22:54:44.210 INFO:teuthology.orchestra.run.vm11.stdout:Get:16 https://archive.ubuntu.com/ubuntu jammy/main amd64 zip amd64 3.0-12build2 [176 kB]
2026-03-08T22:54:44.211 INFO:teuthology.orchestra.run.vm11.stdout:Get:17 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 unzip amd64 6.0-26ubuntu3.2 [175 kB]
2026-03-08T22:54:44.211 INFO:teuthology.orchestra.run.vm11.stdout:Get:18 https://archive.ubuntu.com/ubuntu jammy/universe amd64 luarocks all 3.8.0+dfsg1-1 [140 kB]
2026-03-08T22:54:44.212 INFO:teuthology.orchestra.run.vm11.stdout:Get:19 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 liboath0 amd64 2.6.7-3ubuntu0.1 [41.3 kB]
2026-03-08T22:54:44.213 INFO:teuthology.orchestra.run.vm11.stdout:Get:20 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.functools all 3.4.0-2 [9030 B]
2026-03-08T22:54:44.216 INFO:teuthology.orchestra.run.vm11.stdout:Get:21 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-cheroot all 8.5.2+ds1-1ubuntu3.1 [71.1 kB]
2026-03-08T22:54:44.220 INFO:teuthology.orchestra.run.vm11.stdout:Get:22 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.classes all 3.2.1-3 [6452 B]
2026-03-08T22:54:44.221 INFO:teuthology.orchestra.run.vm11.stdout:Get:23 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.text all 3.6.0-2 [8716 B]
2026-03-08T22:54:44.221 INFO:teuthology.orchestra.run.vm11.stdout:Get:24 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.collections all 3.4.0-2 [11.4 kB]
2026-03-08T22:54:44.221 INFO:teuthology.orchestra.run.vm11.stdout:Get:25 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-tempora all 4.1.2-1 [14.8 kB]
2026-03-08T22:54:44.221 INFO:teuthology.orchestra.run.vm11.stdout:Get:26 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-portend all 3.0.0-1 [7240 B]
2026-03-08T22:54:44.221 INFO:teuthology.orchestra.run.vm11.stdout:Get:27 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-zc.lockfile all 2.0-1 [8980 B]
2026-03-08T22:54:44.221 INFO:teuthology.orchestra.run.vm11.stdout:Get:28 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-cherrypy3 all 18.6.1-4 [208 kB]
2026-03-08T22:54:44.223 INFO:teuthology.orchestra.run.vm11.stdout:Get:29 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-natsort all 8.0.2-1 [35.3 kB]
2026-03-08T22:54:44.224 INFO:teuthology.orchestra.run.vm11.stdout:Get:30 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-logutils all 0.3.3-8 [17.6 kB]
2026-03-08T22:54:44.231 INFO:teuthology.orchestra.run.vm11.stdout:Get:31 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-mako all 1.1.3+ds1-2ubuntu0.1 [60.5 kB]
2026-03-08T22:54:44.231 INFO:teuthology.orchestra.run.vm11.stdout:Get:32 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-simplegeneric all 0.8.1-3 [11.3 kB]
2026-03-08T22:54:44.231 INFO:teuthology.orchestra.run.vm11.stdout:Get:33 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-singledispatch all 3.4.0.3-3 [7320 B] 2026-03-08T22:54:44.232 INFO:teuthology.orchestra.run.vm11.stdout:Get:34 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-webob all 1:1.8.6-1.1ubuntu0.1 [86.7 kB] 2026-03-08T22:54:44.232 INFO:teuthology.orchestra.run.vm11.stdout:Get:35 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-waitress all 1.4.4-1.1ubuntu1.1 [47.0 kB] 2026-03-08T22:54:44.233 INFO:teuthology.orchestra.run.vm11.stdout:Get:36 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-tempita all 0.5.2-6ubuntu1 [15.1 kB] 2026-03-08T22:54:44.233 INFO:teuthology.orchestra.run.vm11.stdout:Get:37 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-paste all 3.5.0+dfsg1-1 [456 kB] 2026-03-08T22:54:44.237 INFO:teuthology.orchestra.run.vm11.stdout:Get:38 https://archive.ubuntu.com/ubuntu jammy/main amd64 python-pastedeploy-tpl all 2.1.1-1 [4892 B] 2026-03-08T22:54:44.238 INFO:teuthology.orchestra.run.vm11.stdout:Get:39 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pastedeploy all 2.1.1-1 [26.6 kB] 2026-03-08T22:54:44.242 INFO:teuthology.orchestra.run.vm11.stdout:Get:40 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-webtest all 2.0.35-1 [28.5 kB] 2026-03-08T22:54:44.245 INFO:teuthology.orchestra.run.vm11.stdout:Get:41 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pecan all 1.3.3-4ubuntu2 [87.3 kB] 2026-03-08T22:54:44.246 INFO:teuthology.orchestra.run.vm11.stdout:Get:42 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-werkzeug all 2.0.2+dfsg1-1ubuntu0.22.04.3 [181 kB] 2026-03-08T22:54:44.247 INFO:teuthology.orchestra.run.vm11.stdout:Get:43 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libfuse2 amd64 2.9.9-5ubuntu3 [90.3 kB] 2026-03-08T22:54:44.248 INFO:teuthology.orchestra.run.vm11.stdout:Get:44 https://archive.ubuntu.com/ubuntu 
jammy-updates/universe amd64 python3-asyncssh all 2.5.0-1ubuntu0.1 [189 kB] 2026-03-08T22:54:44.250 INFO:teuthology.orchestra.run.vm11.stdout:Get:45 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-repoze.lru all 0.7-2 [12.1 kB] 2026-03-08T22:54:44.250 INFO:teuthology.orchestra.run.vm11.stdout:Get:46 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-routes all 2.5.1-1ubuntu1 [89.0 kB] 2026-03-08T22:54:44.250 INFO:teuthology.orchestra.run.vm11.stdout:Get:47 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-sklearn-lib amd64 0.23.2-5ubuntu6 [2058 kB] 2026-03-08T22:54:44.278 INFO:teuthology.orchestra.run.vm11.stdout:Get:48 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-joblib all 0.17.0-4ubuntu1 [204 kB] 2026-03-08T22:54:44.281 INFO:teuthology.orchestra.run.vm11.stdout:Get:49 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-threadpoolctl all 3.1.0-1 [21.3 kB] 2026-03-08T22:54:44.282 INFO:teuthology.orchestra.run.vm11.stdout:Get:50 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-sklearn all 0.23.2-5ubuntu6 [1829 kB] 2026-03-08T22:54:44.288 INFO:teuthology.orchestra.run.vm11.stdout:Get:51 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-cachetools all 5.0.0-1 [9722 B] 2026-03-08T22:54:44.288 INFO:teuthology.orchestra.run.vm11.stdout:Get:52 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-rsa all 4.8-1 [28.4 kB] 2026-03-08T22:54:44.288 INFO:teuthology.orchestra.run.vm11.stdout:Get:53 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-google-auth all 1.5.1-3 [35.7 kB] 2026-03-08T22:54:44.288 INFO:teuthology.orchestra.run.vm11.stdout:Get:54 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-requests-oauthlib all 1.3.0+ds-0.1 [18.7 kB] 2026-03-08T22:54:44.289 INFO:teuthology.orchestra.run.vm11.stdout:Get:55 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-websocket all 1.2.3-1 [34.7 kB] 2026-03-08T22:54:44.289 
INFO:teuthology.orchestra.run.vm11.stdout:Get:56 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-kubernetes all 12.0.1-1ubuntu1 [353 kB] 2026-03-08T22:54:44.291 INFO:teuthology.orchestra.run.vm11.stdout:Get:57 https://archive.ubuntu.com/ubuntu jammy/main amd64 libonig5 amd64 6.9.7.1-2build1 [172 kB] 2026-03-08T22:54:44.298 INFO:teuthology.orchestra.run.vm11.stdout:Get:58 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 libjq1 amd64 1.6-2.1ubuntu3.1 [133 kB] 2026-03-08T22:54:44.299 INFO:teuthology.orchestra.run.vm11.stdout:Get:59 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 jq amd64 1.6-2.1ubuntu3.1 [52.5 kB] 2026-03-08T22:54:44.299 INFO:teuthology.orchestra.run.vm11.stdout:Get:60 https://archive.ubuntu.com/ubuntu jammy/main amd64 socat amd64 1.7.4.1-3ubuntu4 [349 kB] 2026-03-08T22:54:44.302 INFO:teuthology.orchestra.run.vm11.stdout:Get:61 https://archive.ubuntu.com/ubuntu jammy/universe amd64 xmlstarlet amd64 1.6.1-2.1 [265 kB] 2026-03-08T22:54:44.305 INFO:teuthology.orchestra.run.vm11.stdout:Get:62 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua-socket amd64 3.0~rc1+git+ac3201d-6 [78.9 kB] 2026-03-08T22:54:44.305 INFO:teuthology.orchestra.run.vm11.stdout:Get:63 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua-sec amd64 1.0.2-1 [37.6 kB] 2026-03-08T22:54:44.306 INFO:teuthology.orchestra.run.vm11.stdout:Get:64 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 nvme-cli amd64 1.16-3ubuntu0.3 [474 kB] 2026-03-08T22:54:44.342 INFO:teuthology.orchestra.run.vm11.stdout:Get:65 https://archive.ubuntu.com/ubuntu jammy/main amd64 pkg-config amd64 0.29.2-1ubuntu3 [48.2 kB] 2026-03-08T22:54:44.342 INFO:teuthology.orchestra.run.vm11.stdout:Get:66 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 python-asyncssh-doc all 2.5.0-1ubuntu0.1 [309 kB] 2026-03-08T22:54:44.345 INFO:teuthology.orchestra.run.vm11.stdout:Get:67 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-iniconfig all 
1.1.1-2 [6024 B] 2026-03-08T22:54:44.346 INFO:teuthology.orchestra.run.vm11.stdout:Get:68 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pastescript all 2.0.2-4 [54.6 kB] 2026-03-08T22:54:44.346 INFO:teuthology.orchestra.run.vm11.stdout:Get:69 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-pluggy all 0.13.0-7.1 [19.0 kB] 2026-03-08T22:54:44.346 INFO:teuthology.orchestra.run.vm11.stdout:Get:70 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-psutil amd64 5.9.0-1build1 [158 kB] 2026-03-08T22:54:44.347 INFO:teuthology.orchestra.run.vm11.stdout:Get:71 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-py all 1.10.0-1 [71.9 kB] 2026-03-08T22:54:44.347 INFO:teuthology.orchestra.run.vm11.stdout:Get:72 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-pygments all 2.11.2+dfsg-2ubuntu0.1 [750 kB] 2026-03-08T22:54:44.352 INFO:teuthology.orchestra.run.vm11.stdout:Get:73 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pyinotify all 0.9.6-1.3 [24.8 kB] 2026-03-08T22:54:44.352 INFO:teuthology.orchestra.run.vm11.stdout:Get:74 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-toml all 0.10.2-1 [16.5 kB] 2026-03-08T22:54:44.358 INFO:teuthology.orchestra.run.vm11.stdout:Get:75 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-pytest all 6.2.5-1ubuntu2 [214 kB] 2026-03-08T22:54:44.359 INFO:teuthology.orchestra.run.vm11.stdout:Get:76 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-simplejson amd64 3.17.6-1build1 [54.7 kB] 2026-03-08T22:54:44.360 INFO:teuthology.orchestra.run.vm11.stdout:Get:77 https://archive.ubuntu.com/ubuntu jammy/universe amd64 qttranslations5-l10n all 5.15.3-1 [1983 kB] 2026-03-08T22:54:44.406 INFO:teuthology.orchestra.run.vm11.stdout:Get:78 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 smartmontools amd64 7.2-1ubuntu0.1 [583 kB] 2026-03-08T22:54:44.410 INFO:teuthology.orchestra.run.vm06.stdout:Get:2 https://archive.ubuntu.com/ubuntu 
jammy/universe amd64 libdouble-conversion3 amd64 3.1.7-4 [39.0 kB] 2026-03-08T22:54:44.419 INFO:teuthology.orchestra.run.vm06.stdout:Get:3 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 libpcre2-16-0 amd64 10.39-3ubuntu0.1 [203 kB] 2026-03-08T22:54:44.479 INFO:teuthology.orchestra.run.vm06.stdout:Get:4 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 libqt5core5a amd64 5.15.3+dfsg-2ubuntu0.2 [2006 kB] 2026-03-08T22:54:44.573 INFO:teuthology.orchestra.run.vm06.stdout:Get:5 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 librbd1 amd64 19.2.3-678-ge911bdeb-1jammy [3257 kB] 2026-03-08T22:54:44.600 INFO:teuthology.orchestra.run.vm11.stdout:Get:79 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 librbd1 amd64 19.2.3-678-ge911bdeb-1jammy [3257 kB] 2026-03-08T22:54:44.616 INFO:teuthology.orchestra.run.vm06.stdout:Get:6 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 libqt5dbus5 amd64 5.15.3+dfsg-2ubuntu0.2 [222 kB] 2026-03-08T22:54:44.617 INFO:teuthology.orchestra.run.vm06.stdout:Get:7 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 libqt5network5 amd64 5.15.3+dfsg-2ubuntu0.2 [731 kB] 2026-03-08T22:54:44.654 INFO:teuthology.orchestra.run.vm06.stdout:Get:8 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libthrift-0.16.0 amd64 0.16.0-2 [267 kB] 2026-03-08T22:54:44.658 INFO:teuthology.orchestra.run.vm06.stdout:Get:9 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libnbd0 amd64 1.10.5-1 [71.3 kB] 2026-03-08T22:54:44.659 INFO:teuthology.orchestra.run.vm06.stdout:Get:10 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-wcwidth all 0.2.5+dfsg1-1 [21.9 kB] 2026-03-08T22:54:44.659 INFO:teuthology.orchestra.run.vm06.stdout:Get:11 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-prettytable all 2.5.0-2 [31.3 kB] 2026-03-08T22:54:44.660 
INFO:teuthology.orchestra.run.vm06.stdout:Get:12 https://archive.ubuntu.com/ubuntu jammy/universe amd64 librdkafka1 amd64 1.8.0-1build1 [633 kB] 2026-03-08T22:54:44.672 INFO:teuthology.orchestra.run.vm06.stdout:Get:13 https://archive.ubuntu.com/ubuntu jammy/main amd64 libreadline-dev amd64 8.1.2-1 [166 kB] 2026-03-08T22:54:44.674 INFO:teuthology.orchestra.run.vm06.stdout:Get:14 https://archive.ubuntu.com/ubuntu jammy/main amd64 liblua5.3-dev amd64 5.3.6-1build1 [167 kB] 2026-03-08T22:54:44.677 INFO:teuthology.orchestra.run.vm06.stdout:Get:15 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua5.1 amd64 5.1.5-8.1build4 [94.6 kB] 2026-03-08T22:54:44.693 INFO:teuthology.orchestra.run.vm06.stdout:Get:16 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua-any all 27ubuntu1 [5034 B] 2026-03-08T22:54:44.693 INFO:teuthology.orchestra.run.vm06.stdout:Get:17 https://archive.ubuntu.com/ubuntu jammy/main amd64 zip amd64 3.0-12build2 [176 kB] 2026-03-08T22:54:44.722 INFO:teuthology.orchestra.run.vm06.stdout:Get:18 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 unzip amd64 6.0-26ubuntu3.2 [175 kB] 2026-03-08T22:54:44.724 INFO:teuthology.orchestra.run.vm06.stdout:Get:19 https://archive.ubuntu.com/ubuntu jammy/universe amd64 luarocks all 3.8.0+dfsg1-1 [140 kB] 2026-03-08T22:54:44.726 INFO:teuthology.orchestra.run.vm06.stdout:Get:20 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 liboath0 amd64 2.6.7-3ubuntu0.1 [41.3 kB] 2026-03-08T22:54:44.726 INFO:teuthology.orchestra.run.vm06.stdout:Get:21 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.functools all 3.4.0-2 [9030 B] 2026-03-08T22:54:44.727 INFO:teuthology.orchestra.run.vm06.stdout:Get:22 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-cheroot all 8.5.2+ds1-1ubuntu3.1 [71.1 kB] 2026-03-08T22:54:44.728 INFO:teuthology.orchestra.run.vm06.stdout:Get:23 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.classes all 3.2.1-3 [6452 B] 
2026-03-08T22:54:44.728 INFO:teuthology.orchestra.run.vm06.stdout:Get:24 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.text all 3.6.0-2 [8716 B] 2026-03-08T22:54:44.730 INFO:teuthology.orchestra.run.vm06.stdout:Get:25 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.collections all 3.4.0-2 [11.4 kB] 2026-03-08T22:54:44.764 INFO:teuthology.orchestra.run.vm06.stdout:Get:26 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-tempora all 4.1.2-1 [14.8 kB] 2026-03-08T22:54:44.765 INFO:teuthology.orchestra.run.vm06.stdout:Get:27 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-portend all 3.0.0-1 [7240 B] 2026-03-08T22:54:44.765 INFO:teuthology.orchestra.run.vm06.stdout:Get:28 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-zc.lockfile all 2.0-1 [8980 B] 2026-03-08T22:54:44.765 INFO:teuthology.orchestra.run.vm06.stdout:Get:29 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-cherrypy3 all 18.6.1-4 [208 kB] 2026-03-08T22:54:44.766 INFO:teuthology.orchestra.run.vm06.stdout:Get:30 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-natsort all 8.0.2-1 [35.3 kB] 2026-03-08T22:54:44.767 INFO:teuthology.orchestra.run.vm06.stdout:Get:31 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-logutils all 0.3.3-8 [17.6 kB] 2026-03-08T22:54:44.767 INFO:teuthology.orchestra.run.vm06.stdout:Get:32 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-mako all 1.1.3+ds1-2ubuntu0.1 [60.5 kB] 2026-03-08T22:54:44.767 INFO:teuthology.orchestra.run.vm06.stdout:Get:33 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-simplegeneric all 0.8.1-3 [11.3 kB] 2026-03-08T22:54:44.767 INFO:teuthology.orchestra.run.vm06.stdout:Get:34 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-singledispatch all 3.4.0.3-3 [7320 B] 2026-03-08T22:54:44.802 INFO:teuthology.orchestra.run.vm06.stdout:Get:35 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-webob all 
1:1.8.6-1.1ubuntu0.1 [86.7 kB] 2026-03-08T22:54:44.803 INFO:teuthology.orchestra.run.vm06.stdout:Get:36 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-waitress all 1.4.4-1.1ubuntu1.1 [47.0 kB] 2026-03-08T22:54:44.803 INFO:teuthology.orchestra.run.vm06.stdout:Get:37 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-tempita all 0.5.2-6ubuntu1 [15.1 kB] 2026-03-08T22:54:44.804 INFO:teuthology.orchestra.run.vm06.stdout:Get:38 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-paste all 3.5.0+dfsg1-1 [456 kB] 2026-03-08T22:54:44.838 INFO:teuthology.orchestra.run.vm06.stdout:Get:39 https://archive.ubuntu.com/ubuntu jammy/main amd64 python-pastedeploy-tpl all 2.1.1-1 [4892 B] 2026-03-08T22:54:44.838 INFO:teuthology.orchestra.run.vm06.stdout:Get:40 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pastedeploy all 2.1.1-1 [26.6 kB] 2026-03-08T22:54:44.839 INFO:teuthology.orchestra.run.vm06.stdout:Get:41 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-webtest all 2.0.35-1 [28.5 kB] 2026-03-08T22:54:44.839 INFO:teuthology.orchestra.run.vm06.stdout:Get:42 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pecan all 1.3.3-4ubuntu2 [87.3 kB] 2026-03-08T22:54:44.839 INFO:teuthology.orchestra.run.vm06.stdout:Get:43 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-werkzeug all 2.0.2+dfsg1-1ubuntu0.22.04.3 [181 kB] 2026-03-08T22:54:44.841 INFO:teuthology.orchestra.run.vm06.stdout:Get:44 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libfuse2 amd64 2.9.9-5ubuntu3 [90.3 kB] 2026-03-08T22:54:44.874 INFO:teuthology.orchestra.run.vm06.stdout:Get:45 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 python3-asyncssh all 2.5.0-1ubuntu0.1 [189 kB] 2026-03-08T22:54:44.885 INFO:teuthology.orchestra.run.vm06.stdout:Get:46 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-repoze.lru all 0.7-2 [12.1 kB] 2026-03-08T22:54:44.885 INFO:teuthology.orchestra.run.vm06.stdout:Get:47 
https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-routes all 2.5.1-1ubuntu1 [89.0 kB] 2026-03-08T22:54:44.885 INFO:teuthology.orchestra.run.vm06.stdout:Get:48 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-sklearn-lib amd64 0.23.2-5ubuntu6 [2058 kB] 2026-03-08T22:54:44.909 INFO:teuthology.orchestra.run.vm06.stdout:Get:49 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-joblib all 0.17.0-4ubuntu1 [204 kB] 2026-03-08T22:54:44.912 INFO:teuthology.orchestra.run.vm06.stdout:Get:50 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-threadpoolctl all 3.1.0-1 [21.3 kB] 2026-03-08T22:54:44.913 INFO:teuthology.orchestra.run.vm06.stdout:Get:51 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-sklearn all 0.23.2-5ubuntu6 [1829 kB] 2026-03-08T22:54:44.929 INFO:teuthology.orchestra.run.vm06.stdout:Get:52 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-cachetools all 5.0.0-1 [9722 B] 2026-03-08T22:54:44.929 INFO:teuthology.orchestra.run.vm06.stdout:Get:53 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-rsa all 4.8-1 [28.4 kB] 2026-03-08T22:54:44.929 INFO:teuthology.orchestra.run.vm06.stdout:Get:54 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-google-auth all 1.5.1-3 [35.7 kB] 2026-03-08T22:54:44.945 INFO:teuthology.orchestra.run.vm06.stdout:Get:55 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-requests-oauthlib all 1.3.0+ds-0.1 [18.7 kB] 2026-03-08T22:54:44.945 INFO:teuthology.orchestra.run.vm06.stdout:Get:56 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-websocket all 1.2.3-1 [34.7 kB] 2026-03-08T22:54:44.945 INFO:teuthology.orchestra.run.vm06.stdout:Get:57 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-kubernetes all 12.0.1-1ubuntu1 [353 kB] 2026-03-08T22:54:44.948 INFO:teuthology.orchestra.run.vm06.stdout:Get:58 https://archive.ubuntu.com/ubuntu jammy/main amd64 libonig5 amd64 6.9.7.1-2build1 [172 kB] 2026-03-08T22:54:44.981 
INFO:teuthology.orchestra.run.vm06.stdout:Get:59 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 libjq1 amd64 1.6-2.1ubuntu3.1 [133 kB] 2026-03-08T22:54:44.981 INFO:teuthology.orchestra.run.vm06.stdout:Get:60 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 jq amd64 1.6-2.1ubuntu3.1 [52.5 kB] 2026-03-08T22:54:44.982 INFO:teuthology.orchestra.run.vm06.stdout:Get:61 https://archive.ubuntu.com/ubuntu jammy/main amd64 socat amd64 1.7.4.1-3ubuntu4 [349 kB] 2026-03-08T22:54:44.986 INFO:teuthology.orchestra.run.vm06.stdout:Get:62 https://archive.ubuntu.com/ubuntu jammy/universe amd64 xmlstarlet amd64 1.6.1-2.1 [265 kB] 2026-03-08T22:54:44.989 INFO:teuthology.orchestra.run.vm06.stdout:Get:63 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua-socket amd64 3.0~rc1+git+ac3201d-6 [78.9 kB] 2026-03-08T22:54:44.990 INFO:teuthology.orchestra.run.vm06.stdout:Get:64 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua-sec amd64 1.0.2-1 [37.6 kB] 2026-03-08T22:54:45.016 INFO:teuthology.orchestra.run.vm06.stdout:Get:65 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 nvme-cli amd64 1.16-3ubuntu0.3 [474 kB] 2026-03-08T22:54:45.020 INFO:teuthology.orchestra.run.vm06.stdout:Get:66 https://archive.ubuntu.com/ubuntu jammy/main amd64 pkg-config amd64 0.29.2-1ubuntu3 [48.2 kB] 2026-03-08T22:54:45.020 INFO:teuthology.orchestra.run.vm06.stdout:Get:67 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 python-asyncssh-doc all 2.5.0-1ubuntu0.1 [309 kB] 2026-03-08T22:54:45.023 INFO:teuthology.orchestra.run.vm06.stdout:Get:68 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-iniconfig all 1.1.1-2 [6024 B] 2026-03-08T22:54:45.051 INFO:teuthology.orchestra.run.vm06.stdout:Get:69 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pastescript all 2.0.2-4 [54.6 kB] 2026-03-08T22:54:45.052 INFO:teuthology.orchestra.run.vm06.stdout:Get:70 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-pluggy all 0.13.0-7.1 [19.0 
kB] 2026-03-08T22:54:45.052 INFO:teuthology.orchestra.run.vm06.stdout:Get:71 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-psutil amd64 5.9.0-1build1 [158 kB] 2026-03-08T22:54:45.054 INFO:teuthology.orchestra.run.vm06.stdout:Get:72 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-py all 1.10.0-1 [71.9 kB] 2026-03-08T22:54:45.055 INFO:teuthology.orchestra.run.vm06.stdout:Get:73 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-pygments all 2.11.2+dfsg-2ubuntu0.1 [750 kB] 2026-03-08T22:54:45.110 INFO:teuthology.orchestra.run.vm06.stdout:Get:74 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pyinotify all 0.9.6-1.3 [24.8 kB] 2026-03-08T22:54:45.110 INFO:teuthology.orchestra.run.vm06.stdout:Get:75 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-toml all 0.10.2-1 [16.5 kB] 2026-03-08T22:54:45.110 INFO:teuthology.orchestra.run.vm06.stdout:Get:76 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-pytest all 6.2.5-1ubuntu2 [214 kB] 2026-03-08T22:54:45.111 INFO:teuthology.orchestra.run.vm06.stdout:Get:77 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-simplejson amd64 3.17.6-1build1 [54.7 kB] 2026-03-08T22:54:45.112 INFO:teuthology.orchestra.run.vm06.stdout:Get:78 https://archive.ubuntu.com/ubuntu jammy/universe amd64 qttranslations5-l10n all 5.15.3-1 [1983 kB] 2026-03-08T22:54:45.149 INFO:teuthology.orchestra.run.vm06.stdout:Get:79 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 smartmontools amd64 7.2-1ubuntu0.1 [583 kB] 2026-03-08T22:54:45.462 INFO:teuthology.orchestra.run.vm11.stdout:Get:80 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 librados2 amd64 19.2.3-678-ge911bdeb-1jammy [3597 kB] 2026-03-08T22:54:45.500 INFO:teuthology.orchestra.run.vm06.stdout:Get:80 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 
librados2 amd64 19.2.3-678-ge911bdeb-1jammy [3597 kB] 2026-03-08T22:54:45.592 INFO:teuthology.orchestra.run.vm11.stdout:Get:81 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libcephfs2 amd64 19.2.3-678-ge911bdeb-1jammy [979 kB] 2026-03-08T22:54:45.600 INFO:teuthology.orchestra.run.vm11.stdout:Get:82 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-rados amd64 19.2.3-678-ge911bdeb-1jammy [357 kB] 2026-03-08T22:54:45.604 INFO:teuthology.orchestra.run.vm11.stdout:Get:83 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-ceph-argparse all 19.2.3-678-ge911bdeb-1jammy [32.9 kB] 2026-03-08T22:54:45.605 INFO:teuthology.orchestra.run.vm11.stdout:Get:84 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-cephfs amd64 19.2.3-678-ge911bdeb-1jammy [184 kB] 2026-03-08T22:54:45.609 INFO:teuthology.orchestra.run.vm11.stdout:Get:85 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-ceph-common all 19.2.3-678-ge911bdeb-1jammy [70.1 kB] 2026-03-08T22:54:45.611 INFO:teuthology.orchestra.run.vm11.stdout:Get:86 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-rbd amd64 19.2.3-678-ge911bdeb-1jammy [334 kB] 2026-03-08T22:54:45.619 INFO:teuthology.orchestra.run.vm11.stdout:Get:87 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 librgw2 amd64 19.2.3-678-ge911bdeb-1jammy [6935 kB] 2026-03-08T22:54:45.748 INFO:teuthology.orchestra.run.vm06.stdout:Get:81 
https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libcephfs2 amd64 19.2.3-678-ge911bdeb-1jammy [979 kB] 2026-03-08T22:54:45.751 INFO:teuthology.orchestra.run.vm06.stdout:Get:82 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-rados amd64 19.2.3-678-ge911bdeb-1jammy [357 kB] 2026-03-08T22:54:45.752 INFO:teuthology.orchestra.run.vm06.stdout:Get:83 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-ceph-argparse all 19.2.3-678-ge911bdeb-1jammy [32.9 kB] 2026-03-08T22:54:45.752 INFO:teuthology.orchestra.run.vm06.stdout:Get:84 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-cephfs amd64 19.2.3-678-ge911bdeb-1jammy [184 kB] 2026-03-08T22:54:45.753 INFO:teuthology.orchestra.run.vm06.stdout:Get:85 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-ceph-common all 19.2.3-678-ge911bdeb-1jammy [70.1 kB] 2026-03-08T22:54:45.753 INFO:teuthology.orchestra.run.vm06.stdout:Get:86 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-rbd amd64 19.2.3-678-ge911bdeb-1jammy [334 kB] 2026-03-08T22:54:45.858 INFO:teuthology.orchestra.run.vm06.stdout:Get:87 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 librgw2 amd64 19.2.3-678-ge911bdeb-1jammy [6935 kB] 2026-03-08T22:54:45.947 INFO:teuthology.orchestra.run.vm11.stdout:Get:88 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-rgw amd64 19.2.3-678-ge911bdeb-1jammy [112 kB] 2026-03-08T22:54:45.949 
INFO:teuthology.orchestra.run.vm11.stdout:Get:89 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libradosstriper1 amd64 19.2.3-678-ge911bdeb-1jammy [470 kB] 2026-03-08T22:54:45.960 INFO:teuthology.orchestra.run.vm11.stdout:Get:90 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-common amd64 19.2.3-678-ge911bdeb-1jammy [26.5 MB] 2026-03-08T22:54:46.186 INFO:teuthology.orchestra.run.vm06.stdout:Get:88 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-rgw amd64 19.2.3-678-ge911bdeb-1jammy [112 kB] 2026-03-08T22:54:46.187 INFO:teuthology.orchestra.run.vm06.stdout:Get:89 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libradosstriper1 amd64 19.2.3-678-ge911bdeb-1jammy [470 kB] 2026-03-08T22:54:46.213 INFO:teuthology.orchestra.run.vm06.stdout:Get:90 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-common amd64 19.2.3-678-ge911bdeb-1jammy [26.5 MB] 2026-03-08T22:54:47.125 INFO:teuthology.orchestra.run.vm11.stdout:Get:91 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-base amd64 19.2.3-678-ge911bdeb-1jammy [5178 kB] 2026-03-08T22:54:47.285 INFO:teuthology.orchestra.run.vm11.stdout:Get:92 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-modules-core all 19.2.3-678-ge911bdeb-1jammy [248 kB] 2026-03-08T22:54:47.297 INFO:teuthology.orchestra.run.vm11.stdout:Get:93 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libsqlite3-mod-ceph amd64 
19.2.3-678-ge911bdeb-1jammy [125 kB] 2026-03-08T22:54:47.299 INFO:teuthology.orchestra.run.vm11.stdout:Get:94 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr amd64 19.2.3-678-ge911bdeb-1jammy [1081 kB] 2026-03-08T22:54:47.349 INFO:teuthology.orchestra.run.vm06.stdout:Get:91 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-base amd64 19.2.3-678-ge911bdeb-1jammy [5178 kB] 2026-03-08T22:54:47.376 INFO:teuthology.orchestra.run.vm11.stdout:Get:95 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mon amd64 19.2.3-678-ge911bdeb-1jammy [6239 kB] 2026-03-08T22:54:47.565 INFO:teuthology.orchestra.run.vm06.stdout:Get:92 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-modules-core all 19.2.3-678-ge911bdeb-1jammy [248 kB] 2026-03-08T22:54:47.568 INFO:teuthology.orchestra.run.vm06.stdout:Get:93 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libsqlite3-mod-ceph amd64 19.2.3-678-ge911bdeb-1jammy [125 kB] 2026-03-08T22:54:47.585 INFO:teuthology.orchestra.run.vm06.stdout:Get:94 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr amd64 19.2.3-678-ge911bdeb-1jammy [1081 kB] 2026-03-08T22:54:47.626 INFO:teuthology.orchestra.run.vm11.stdout:Get:96 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-osd amd64 19.2.3-678-ge911bdeb-1jammy [23.0 MB] 2026-03-08T22:54:47.643 INFO:teuthology.orchestra.run.vm06.stdout:Get:95 
https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mon amd64 19.2.3-678-ge911bdeb-1jammy [6239 kB] 2026-03-08T22:54:47.895 INFO:teuthology.orchestra.run.vm06.stdout:Get:96 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-osd amd64 19.2.3-678-ge911bdeb-1jammy [23.0 MB] 2026-03-08T22:54:48.543 INFO:teuthology.orchestra.run.vm11.stdout:Get:97 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph amd64 19.2.3-678-ge911bdeb-1jammy [14.2 kB] 2026-03-08T22:54:48.543 INFO:teuthology.orchestra.run.vm11.stdout:Get:98 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-fuse amd64 19.2.3-678-ge911bdeb-1jammy [1173 kB] 2026-03-08T22:54:48.577 INFO:teuthology.orchestra.run.vm11.stdout:Get:99 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mds amd64 19.2.3-678-ge911bdeb-1jammy [2503 kB] 2026-03-08T22:54:48.673 INFO:teuthology.orchestra.run.vm11.stdout:Get:100 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 cephadm amd64 19.2.3-678-ge911bdeb-1jammy [798 kB] 2026-03-08T22:54:48.700 INFO:teuthology.orchestra.run.vm11.stdout:Get:101 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-cephadm all 19.2.3-678-ge911bdeb-1jammy [157 kB] 2026-03-08T22:54:48.733 INFO:teuthology.orchestra.run.vm11.stdout:Get:102 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-dashboard all 19.2.3-678-ge911bdeb-1jammy [2396 kB] 2026-03-08T22:54:48.800 
INFO:teuthology.orchestra.run.vm11.stdout:Get:103 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-diskprediction-local all 19.2.3-678-ge911bdeb-1jammy [8625 kB] 2026-03-08T22:54:48.816 INFO:teuthology.orchestra.run.vm06.stdout:Get:97 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph amd64 19.2.3-678-ge911bdeb-1jammy [14.2 kB] 2026-03-08T22:54:48.816 INFO:teuthology.orchestra.run.vm06.stdout:Get:98 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-fuse amd64 19.2.3-678-ge911bdeb-1jammy [1173 kB] 2026-03-08T22:54:48.850 INFO:teuthology.orchestra.run.vm06.stdout:Get:99 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mds amd64 19.2.3-678-ge911bdeb-1jammy [2503 kB] 2026-03-08T22:54:48.949 INFO:teuthology.orchestra.run.vm06.stdout:Get:100 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 cephadm amd64 19.2.3-678-ge911bdeb-1jammy [798 kB] 2026-03-08T22:54:48.982 INFO:teuthology.orchestra.run.vm06.stdout:Get:101 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-cephadm all 19.2.3-678-ge911bdeb-1jammy [157 kB] 2026-03-08T22:54:48.993 INFO:teuthology.orchestra.run.vm06.stdout:Get:102 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-dashboard all 19.2.3-678-ge911bdeb-1jammy [2396 kB] 2026-03-08T22:54:49.069 INFO:teuthology.orchestra.run.vm06.stdout:Get:103 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 
ceph-mgr-diskprediction-local all 19.2.3-678-ge911bdeb-1jammy [8625 kB] 2026-03-08T22:54:49.154 INFO:teuthology.orchestra.run.vm11.stdout:Get:104 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-k8sevents all 19.2.3-678-ge911bdeb-1jammy [14.3 kB] 2026-03-08T22:54:49.154 INFO:teuthology.orchestra.run.vm11.stdout:Get:105 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-test amd64 19.2.3-678-ge911bdeb-1jammy [52.1 MB] 2026-03-08T22:54:49.418 INFO:teuthology.orchestra.run.vm06.stdout:Get:104 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-k8sevents all 19.2.3-678-ge911bdeb-1jammy [14.3 kB] 2026-03-08T22:54:49.418 INFO:teuthology.orchestra.run.vm06.stdout:Get:105 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-test amd64 19.2.3-678-ge911bdeb-1jammy [52.1 MB] 2026-03-08T22:54:51.249 INFO:teuthology.orchestra.run.vm11.stdout:Get:106 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-volume all 19.2.3-678-ge911bdeb-1jammy [135 kB] 2026-03-08T22:54:51.249 INFO:teuthology.orchestra.run.vm11.stdout:Get:107 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libcephfs-dev amd64 19.2.3-678-ge911bdeb-1jammy [41.0 kB] 2026-03-08T22:54:51.250 INFO:teuthology.orchestra.run.vm11.stdout:Get:108 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 radosgw amd64 19.2.3-678-ge911bdeb-1jammy [13.7 MB] 2026-03-08T22:54:51.386 INFO:teuthology.orchestra.run.vm06.stdout:Get:106 
https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-volume all 19.2.3-678-ge911bdeb-1jammy [135 kB] 2026-03-08T22:54:51.387 INFO:teuthology.orchestra.run.vm06.stdout:Get:107 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libcephfs-dev amd64 19.2.3-678-ge911bdeb-1jammy [41.0 kB] 2026-03-08T22:54:51.387 INFO:teuthology.orchestra.run.vm06.stdout:Get:108 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 radosgw amd64 19.2.3-678-ge911bdeb-1jammy [13.7 MB] 2026-03-08T22:54:51.776 INFO:teuthology.orchestra.run.vm11.stdout:Get:109 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 rbd-fuse amd64 19.2.3-678-ge911bdeb-1jammy [92.2 kB] 2026-03-08T22:54:51.902 INFO:teuthology.orchestra.run.vm06.stdout:Get:109 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 rbd-fuse amd64 19.2.3-678-ge911bdeb-1jammy [92.2 kB] 2026-03-08T22:54:52.104 INFO:teuthology.orchestra.run.vm11.stdout:Fetched 178 MB in 8s (22.8 MB/s) 2026-03-08T22:54:52.220 INFO:teuthology.orchestra.run.vm06.stdout:Fetched 178 MB in 8s (22.5 MB/s) 2026-03-08T22:54:52.549 INFO:teuthology.orchestra.run.vm11.stdout:Selecting previously unselected package liblttng-ust1:amd64. 2026-03-08T22:54:52.559 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package liblttng-ust1:amd64. 2026-03-08T22:54:52.590 INFO:teuthology.orchestra.run.vm11.stdout:(Reading database ... 111717 files and directories currently installed.)
2026-03-08T22:54:52.590 INFO:teuthology.orchestra.run.vm06.stdout:(Reading database ... 111717 files and directories currently installed.) 2026-03-08T22:54:52.592 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../000-liblttng-ust1_2.13.1-1ubuntu1_amd64.deb ... 2026-03-08T22:54:52.592 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../000-liblttng-ust1_2.13.1-1ubuntu1_amd64.deb ... 2026-03-08T22:54:52.600 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking liblttng-ust1:amd64 (2.13.1-1ubuntu1) ... 2026-03-08T22:54:52.600 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking liblttng-ust1:amd64 (2.13.1-1ubuntu1) ... 2026-03-08T22:54:52.643 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package libdouble-conversion3:amd64. 2026-03-08T22:54:52.644 INFO:teuthology.orchestra.run.vm11.stdout:Selecting previously unselected package libdouble-conversion3:amd64. 2026-03-08T22:54:52.649 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../001-libdouble-conversion3_3.1.7-4_amd64.deb ...
2026-03-08T22:54:52.649 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../001-libdouble-conversion3_3.1.7-4_amd64.deb ... 2026-03-08T22:54:52.651 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking libdouble-conversion3:amd64 (3.1.7-4) ... 2026-03-08T22:54:52.651 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking libdouble-conversion3:amd64 (3.1.7-4) ... 2026-03-08T22:54:52.690 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package libpcre2-16-0:amd64. 2026-03-08T22:54:52.690 INFO:teuthology.orchestra.run.vm11.stdout:Selecting previously unselected package libpcre2-16-0:amd64. 2026-03-08T22:54:52.696 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../002-libpcre2-16-0_10.39-3ubuntu0.1_amd64.deb ... 2026-03-08T22:54:52.697 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../002-libpcre2-16-0_10.39-3ubuntu0.1_amd64.deb ... 2026-03-08T22:54:52.698 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking libpcre2-16-0:amd64 (10.39-3ubuntu0.1) ... 2026-03-08T22:54:52.698 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking libpcre2-16-0:amd64 (10.39-3ubuntu0.1) ... 2026-03-08T22:54:52.748 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package libqt5core5a:amd64. 2026-03-08T22:54:52.748 INFO:teuthology.orchestra.run.vm11.stdout:Selecting previously unselected package libqt5core5a:amd64. 2026-03-08T22:54:52.754 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../003-libqt5core5a_5.15.3+dfsg-2ubuntu0.2_amd64.deb ... 2026-03-08T22:54:52.754 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../003-libqt5core5a_5.15.3+dfsg-2ubuntu0.2_amd64.deb ... 2026-03-08T22:54:52.759 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking libqt5core5a:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-08T22:54:52.759 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking libqt5core5a:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 
2026-03-08T22:54:52.830 INFO:teuthology.orchestra.run.vm11.stdout:Selecting previously unselected package libqt5dbus5:amd64. 2026-03-08T22:54:52.836 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../004-libqt5dbus5_5.15.3+dfsg-2ubuntu0.2_amd64.deb ... 2026-03-08T22:54:52.838 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking libqt5dbus5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-08T22:54:52.838 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package libqt5dbus5:amd64. 2026-03-08T22:54:52.844 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../004-libqt5dbus5_5.15.3+dfsg-2ubuntu0.2_amd64.deb ... 2026-03-08T22:54:52.849 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking libqt5dbus5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-08T22:54:52.878 INFO:teuthology.orchestra.run.vm11.stdout:Selecting previously unselected package libqt5network5:amd64. 2026-03-08T22:54:52.882 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package libqt5network5:amd64. 2026-03-08T22:54:52.883 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../005-libqt5network5_5.15.3+dfsg-2ubuntu0.2_amd64.deb ... 2026-03-08T22:54:52.885 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking libqt5network5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-08T22:54:52.887 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../005-libqt5network5_5.15.3+dfsg-2ubuntu0.2_amd64.deb ... 2026-03-08T22:54:52.888 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking libqt5network5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-08T22:54:52.918 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package libthrift-0.16.0:amd64. 2026-03-08T22:54:52.918 INFO:teuthology.orchestra.run.vm11.stdout:Selecting previously unselected package libthrift-0.16.0:amd64. 2026-03-08T22:54:52.924 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../006-libthrift-0.16.0_0.16.0-2_amd64.deb ... 
2026-03-08T22:54:52.924 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../006-libthrift-0.16.0_0.16.0-2_amd64.deb ... 2026-03-08T22:54:52.925 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking libthrift-0.16.0:amd64 (0.16.0-2) ... 2026-03-08T22:54:52.925 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking libthrift-0.16.0:amd64 (0.16.0-2) ... 2026-03-08T22:54:52.953 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../007-librbd1_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-08T22:54:52.955 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../007-librbd1_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-08T22:54:52.955 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking librbd1 (19.2.3-678-ge911bdeb-1jammy) over (17.2.9-0ubuntu0.22.04.2) ... 2026-03-08T22:54:52.958 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking librbd1 (19.2.3-678-ge911bdeb-1jammy) over (17.2.9-0ubuntu0.22.04.2) ... 2026-03-08T22:54:53.052 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../008-librados2_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-08T22:54:53.055 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking librados2 (19.2.3-678-ge911bdeb-1jammy) over (17.2.9-0ubuntu0.22.04.2) ... 2026-03-08T22:54:53.057 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../008-librados2_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-08T22:54:53.060 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking librados2 (19.2.3-678-ge911bdeb-1jammy) over (17.2.9-0ubuntu0.22.04.2) ... 2026-03-08T22:54:53.155 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package libnbd0. 2026-03-08T22:54:53.155 INFO:teuthology.orchestra.run.vm11.stdout:Selecting previously unselected package libnbd0. 2026-03-08T22:54:53.155 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../009-libnbd0_1.10.5-1_amd64.deb ... 2026-03-08T22:54:53.156 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking libnbd0 (1.10.5-1) ... 
2026-03-08T22:54:53.162 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../009-libnbd0_1.10.5-1_amd64.deb ... 2026-03-08T22:54:53.163 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking libnbd0 (1.10.5-1) ... 2026-03-08T22:54:53.171 INFO:teuthology.orchestra.run.vm11.stdout:Selecting previously unselected package libcephfs2. 2026-03-08T22:54:53.177 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../010-libcephfs2_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-08T22:54:53.178 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking libcephfs2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T22:54:53.182 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package libcephfs2. 2026-03-08T22:54:53.188 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../010-libcephfs2_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-08T22:54:53.189 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking libcephfs2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T22:54:53.207 INFO:teuthology.orchestra.run.vm11.stdout:Selecting previously unselected package python3-rados. 2026-03-08T22:54:53.213 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../011-python3-rados_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-08T22:54:53.214 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking python3-rados (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T22:54:53.218 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package python3-rados. 2026-03-08T22:54:53.224 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../011-python3-rados_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-08T22:54:53.224 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking python3-rados (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T22:54:53.234 INFO:teuthology.orchestra.run.vm11.stdout:Selecting previously unselected package python3-ceph-argparse. 
2026-03-08T22:54:53.241 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../012-python3-ceph-argparse_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-08T22:54:53.242 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking python3-ceph-argparse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T22:54:53.247 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package python3-ceph-argparse. 2026-03-08T22:54:53.252 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../012-python3-ceph-argparse_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-08T22:54:53.253 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking python3-ceph-argparse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T22:54:53.258 INFO:teuthology.orchestra.run.vm11.stdout:Selecting previously unselected package python3-cephfs. 2026-03-08T22:54:53.263 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../013-python3-cephfs_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-08T22:54:53.264 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking python3-cephfs (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T22:54:53.269 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package python3-cephfs. 2026-03-08T22:54:53.274 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../013-python3-cephfs_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-08T22:54:53.274 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking python3-cephfs (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T22:54:53.283 INFO:teuthology.orchestra.run.vm11.stdout:Selecting previously unselected package python3-ceph-common. 2026-03-08T22:54:53.288 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../014-python3-ceph-common_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-08T22:54:53.289 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking python3-ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 
2026-03-08T22:54:53.293 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package python3-ceph-common. 2026-03-08T22:54:53.299 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../014-python3-ceph-common_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-08T22:54:53.300 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking python3-ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T22:54:53.311 INFO:teuthology.orchestra.run.vm11.stdout:Selecting previously unselected package python3-wcwidth. 2026-03-08T22:54:53.316 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../015-python3-wcwidth_0.2.5+dfsg1-1_all.deb ... 2026-03-08T22:54:53.316 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking python3-wcwidth (0.2.5+dfsg1-1) ... 2026-03-08T22:54:53.323 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package python3-wcwidth. 2026-03-08T22:54:53.329 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../015-python3-wcwidth_0.2.5+dfsg1-1_all.deb ... 2026-03-08T22:54:53.329 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking python3-wcwidth (0.2.5+dfsg1-1) ... 2026-03-08T22:54:53.333 INFO:teuthology.orchestra.run.vm11.stdout:Selecting previously unselected package python3-prettytable. 2026-03-08T22:54:53.337 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../016-python3-prettytable_2.5.0-2_all.deb ... 2026-03-08T22:54:53.338 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking python3-prettytable (2.5.0-2) ... 2026-03-08T22:54:53.353 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package python3-prettytable. 2026-03-08T22:54:53.359 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../016-python3-prettytable_2.5.0-2_all.deb ... 2026-03-08T22:54:53.360 INFO:teuthology.orchestra.run.vm11.stdout:Selecting previously unselected package python3-rbd. 
2026-03-08T22:54:53.361 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking python3-prettytable (2.5.0-2) ... 2026-03-08T22:54:53.364 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../017-python3-rbd_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-08T22:54:53.367 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking python3-rbd (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T22:54:53.394 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package python3-rbd. 2026-03-08T22:54:53.399 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../017-python3-rbd_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-08T22:54:53.404 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking python3-rbd (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T22:54:53.405 INFO:teuthology.orchestra.run.vm11.stdout:Selecting previously unselected package librdkafka1:amd64. 2026-03-08T22:54:53.409 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../018-librdkafka1_1.8.0-1build1_amd64.deb ... 2026-03-08T22:54:53.411 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking librdkafka1:amd64 (1.8.0-1build1) ... 2026-03-08T22:54:53.453 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package librdkafka1:amd64. 2026-03-08T22:54:53.454 INFO:teuthology.orchestra.run.vm11.stdout:Selecting previously unselected package libreadline-dev:amd64. 2026-03-08T22:54:53.459 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../018-librdkafka1_1.8.0-1build1_amd64.deb ... 2026-03-08T22:54:53.460 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../019-libreadline-dev_8.1.2-1_amd64.deb ... 2026-03-08T22:54:53.461 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking librdkafka1:amd64 (1.8.0-1build1) ... 2026-03-08T22:54:53.462 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking libreadline-dev:amd64 (8.1.2-1) ... 
2026-03-08T22:54:53.496 INFO:teuthology.orchestra.run.vm11.stdout:Selecting previously unselected package liblua5.3-dev:amd64. 2026-03-08T22:54:53.502 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../020-liblua5.3-dev_5.3.6-1build1_amd64.deb ... 2026-03-08T22:54:53.503 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package libreadline-dev:amd64. 2026-03-08T22:54:53.503 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking liblua5.3-dev:amd64 (5.3.6-1build1) ... 2026-03-08T22:54:53.508 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../019-libreadline-dev_8.1.2-1_amd64.deb ... 2026-03-08T22:54:53.509 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking libreadline-dev:amd64 (8.1.2-1) ... 2026-03-08T22:54:53.542 INFO:teuthology.orchestra.run.vm11.stdout:Selecting previously unselected package lua5.1. 2026-03-08T22:54:53.544 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package liblua5.3-dev:amd64. 2026-03-08T22:54:53.548 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../021-lua5.1_5.1.5-8.1build4_amd64.deb ... 2026-03-08T22:54:53.550 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../020-liblua5.3-dev_5.3.6-1build1_amd64.deb ... 2026-03-08T22:54:53.551 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking lua5.1 (5.1.5-8.1build4) ... 2026-03-08T22:54:53.555 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking liblua5.3-dev:amd64 (5.3.6-1build1) ... 2026-03-08T22:54:53.593 INFO:teuthology.orchestra.run.vm11.stdout:Selecting previously unselected package lua-any. 2026-03-08T22:54:53.594 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package lua5.1. 2026-03-08T22:54:53.599 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../022-lua-any_27ubuntu1_all.deb ... 2026-03-08T22:54:53.600 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../021-lua5.1_5.1.5-8.1build4_amd64.deb ... 
2026-03-08T22:54:53.601 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking lua-any (27ubuntu1) ... 2026-03-08T22:54:53.606 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking lua5.1 (5.1.5-8.1build4) ... 2026-03-08T22:54:53.628 INFO:teuthology.orchestra.run.vm11.stdout:Selecting previously unselected package zip. 2026-03-08T22:54:53.634 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../023-zip_3.0-12build2_amd64.deb ... 2026-03-08T22:54:53.635 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking zip (3.0-12build2) ... 2026-03-08T22:54:53.643 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package lua-any. 2026-03-08T22:54:53.649 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../022-lua-any_27ubuntu1_all.deb ... 2026-03-08T22:54:53.650 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking lua-any (27ubuntu1) ... 2026-03-08T22:54:53.673 INFO:teuthology.orchestra.run.vm11.stdout:Selecting previously unselected package unzip. 2026-03-08T22:54:53.678 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package zip. 2026-03-08T22:54:53.679 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../024-unzip_6.0-26ubuntu3.2_amd64.deb ... 2026-03-08T22:54:53.684 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../023-zip_3.0-12build2_amd64.deb ... 2026-03-08T22:54:53.685 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking unzip (6.0-26ubuntu3.2) ... 2026-03-08T22:54:53.686 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking zip (3.0-12build2) ... 2026-03-08T22:54:53.720 INFO:teuthology.orchestra.run.vm11.stdout:Selecting previously unselected package luarocks. 2026-03-08T22:54:53.720 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package unzip. 2026-03-08T22:54:53.725 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../025-luarocks_3.8.0+dfsg1-1_all.deb ... 
2026-03-08T22:54:53.727 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking luarocks (3.8.0+dfsg1-1) ... 2026-03-08T22:54:53.727 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../024-unzip_6.0-26ubuntu3.2_amd64.deb ... 2026-03-08T22:54:53.733 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking unzip (6.0-26ubuntu3.2) ... 2026-03-08T22:54:53.777 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package luarocks. 2026-03-08T22:54:53.783 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../025-luarocks_3.8.0+dfsg1-1_all.deb ... 2026-03-08T22:54:53.784 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking luarocks (3.8.0+dfsg1-1) ... 2026-03-08T22:54:53.798 INFO:teuthology.orchestra.run.vm11.stdout:Selecting previously unselected package librgw2. 2026-03-08T22:54:53.804 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../026-librgw2_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-08T22:54:53.805 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking librgw2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T22:54:53.846 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package librgw2. 2026-03-08T22:54:53.852 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../026-librgw2_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-08T22:54:53.853 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking librgw2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T22:54:54.004 INFO:teuthology.orchestra.run.vm11.stdout:Selecting previously unselected package python3-rgw. 2026-03-08T22:54:54.006 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../027-python3-rgw_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-08T22:54:54.007 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package python3-rgw. 2026-03-08T22:54:54.011 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking python3-rgw (19.2.3-678-ge911bdeb-1jammy) ... 
2026-03-08T22:54:54.012 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../027-python3-rgw_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-08T22:54:54.013 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking python3-rgw (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T22:54:54.052 INFO:teuthology.orchestra.run.vm11.stdout:Selecting previously unselected package liboath0:amd64. 2026-03-08T22:54:54.053 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package liboath0:amd64. 2026-03-08T22:54:54.055 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../028-liboath0_2.6.7-3ubuntu0.1_amd64.deb ... 2026-03-08T22:54:54.059 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../028-liboath0_2.6.7-3ubuntu0.1_amd64.deb ... 2026-03-08T22:54:54.060 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking liboath0:amd64 (2.6.7-3ubuntu0.1) ... 2026-03-08T22:54:54.060 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking liboath0:amd64 (2.6.7-3ubuntu0.1) ... 2026-03-08T22:54:54.122 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package libradosstriper1. 2026-03-08T22:54:54.128 INFO:teuthology.orchestra.run.vm11.stdout:Selecting previously unselected package libradosstriper1. 2026-03-08T22:54:54.128 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../029-libradosstriper1_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-08T22:54:54.129 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../029-libradosstriper1_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-08T22:54:54.133 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking libradosstriper1 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T22:54:54.134 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking libradosstriper1 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T22:54:54.168 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package ceph-common. 
2026-03-08T22:54:54.173 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../030-ceph-common_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-08T22:54:54.174 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T22:54:54.179 INFO:teuthology.orchestra.run.vm11.stdout:Selecting previously unselected package ceph-common. 2026-03-08T22:54:54.184 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../030-ceph-common_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-08T22:54:54.188 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T22:54:54.695 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package ceph-base. 2026-03-08T22:54:54.699 INFO:teuthology.orchestra.run.vm11.stdout:Selecting previously unselected package ceph-base. 2026-03-08T22:54:54.700 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../031-ceph-base_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-08T22:54:54.702 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../031-ceph-base_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-08T22:54:54.705 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking ceph-base (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T22:54:54.707 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking ceph-base (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T22:54:54.849 INFO:teuthology.orchestra.run.vm11.stdout:Selecting previously unselected package python3-jaraco.functools. 2026-03-08T22:54:54.855 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../032-python3-jaraco.functools_3.4.0-2_all.deb ... 2026-03-08T22:54:54.856 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking python3-jaraco.functools (3.4.0-2) ... 2026-03-08T22:54:54.857 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package python3-jaraco.functools. 
2026-03-08T22:54:54.859 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../032-python3-jaraco.functools_3.4.0-2_all.deb ...
2026-03-08T22:54:54.860 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking python3-jaraco.functools (3.4.0-2) ...
2026-03-08T22:54:54.873 INFO:teuthology.orchestra.run.vm11.stdout:Selecting previously unselected package python3-cheroot.
2026-03-08T22:54:54.876 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package python3-cheroot.
2026-03-08T22:54:54.879 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../033-python3-cheroot_8.5.2+ds1-1ubuntu3.1_all.deb ...
2026-03-08T22:54:54.879 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking python3-cheroot (8.5.2+ds1-1ubuntu3.1) ...
2026-03-08T22:54:54.881 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../033-python3-cheroot_8.5.2+ds1-1ubuntu3.1_all.deb ...
2026-03-08T22:54:54.883 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking python3-cheroot (8.5.2+ds1-1ubuntu3.1) ...
2026-03-08T22:54:54.898 INFO:teuthology.orchestra.run.vm11.stdout:Selecting previously unselected package python3-jaraco.classes.
2026-03-08T22:54:54.903 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package python3-jaraco.classes.
2026-03-08T22:54:54.904 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../034-python3-jaraco.classes_3.2.1-3_all.deb ...
2026-03-08T22:54:54.905 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking python3-jaraco.classes (3.2.1-3) ...
2026-03-08T22:54:54.909 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../034-python3-jaraco.classes_3.2.1-3_all.deb ...
2026-03-08T22:54:54.909 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking python3-jaraco.classes (3.2.1-3) ...
2026-03-08T22:54:54.920 INFO:teuthology.orchestra.run.vm11.stdout:Selecting previously unselected package python3-jaraco.text.
2026-03-08T22:54:54.924 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package python3-jaraco.text.
2026-03-08T22:54:54.926 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../035-python3-jaraco.text_3.6.0-2_all.deb ...
2026-03-08T22:54:54.927 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking python3-jaraco.text (3.6.0-2) ...
2026-03-08T22:54:54.930 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../035-python3-jaraco.text_3.6.0-2_all.deb ...
2026-03-08T22:54:54.931 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking python3-jaraco.text (3.6.0-2) ...
2026-03-08T22:54:54.945 INFO:teuthology.orchestra.run.vm11.stdout:Selecting previously unselected package python3-jaraco.collections.
2026-03-08T22:54:54.947 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package python3-jaraco.collections.
2026-03-08T22:54:54.951 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../036-python3-jaraco.collections_3.4.0-2_all.deb ...
2026-03-08T22:54:54.952 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking python3-jaraco.collections (3.4.0-2) ...
2026-03-08T22:54:54.953 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../036-python3-jaraco.collections_3.4.0-2_all.deb ...
2026-03-08T22:54:54.954 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking python3-jaraco.collections (3.4.0-2) ...
2026-03-08T22:54:54.970 INFO:teuthology.orchestra.run.vm11.stdout:Selecting previously unselected package python3-tempora.
2026-03-08T22:54:54.970 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package python3-tempora.
2026-03-08T22:54:54.976 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../037-python3-tempora_4.1.2-1_all.deb ...
2026-03-08T22:54:54.976 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../037-python3-tempora_4.1.2-1_all.deb ...
2026-03-08T22:54:54.977 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking python3-tempora (4.1.2-1) ...
2026-03-08T22:54:54.978 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking python3-tempora (4.1.2-1) ...
2026-03-08T22:54:54.994 INFO:teuthology.orchestra.run.vm11.stdout:Selecting previously unselected package python3-portend.
2026-03-08T22:54:54.995 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package python3-portend.
2026-03-08T22:54:55.000 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../038-python3-portend_3.0.0-1_all.deb ...
2026-03-08T22:54:55.001 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking python3-portend (3.0.0-1) ...
2026-03-08T22:54:55.002 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../038-python3-portend_3.0.0-1_all.deb ...
2026-03-08T22:54:55.003 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking python3-portend (3.0.0-1) ...
2026-03-08T22:54:55.016 INFO:teuthology.orchestra.run.vm11.stdout:Selecting previously unselected package python3-zc.lockfile.
2026-03-08T22:54:55.021 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../039-python3-zc.lockfile_2.0-1_all.deb ...
2026-03-08T22:54:55.021 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking python3-zc.lockfile (2.0-1) ...
2026-03-08T22:54:55.022 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package python3-zc.lockfile.
2026-03-08T22:54:55.028 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../039-python3-zc.lockfile_2.0-1_all.deb ...
2026-03-08T22:54:55.029 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking python3-zc.lockfile (2.0-1) ...
2026-03-08T22:54:55.037 INFO:teuthology.orchestra.run.vm11.stdout:Selecting previously unselected package python3-cherrypy3.
2026-03-08T22:54:55.042 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../040-python3-cherrypy3_18.6.1-4_all.deb ...
2026-03-08T22:54:55.044 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking python3-cherrypy3 (18.6.1-4) ...
2026-03-08T22:54:55.053 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package python3-cherrypy3.
2026-03-08T22:54:55.058 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../040-python3-cherrypy3_18.6.1-4_all.deb ...
2026-03-08T22:54:55.060 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking python3-cherrypy3 (18.6.1-4) ...
2026-03-08T22:54:55.078 INFO:teuthology.orchestra.run.vm11.stdout:Selecting previously unselected package python3-natsort.
2026-03-08T22:54:55.084 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../041-python3-natsort_8.0.2-1_all.deb ...
2026-03-08T22:54:55.085 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking python3-natsort (8.0.2-1) ...
2026-03-08T22:54:55.090 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package python3-natsort.
2026-03-08T22:54:55.096 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../041-python3-natsort_8.0.2-1_all.deb ...
2026-03-08T22:54:55.097 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking python3-natsort (8.0.2-1) ...
2026-03-08T22:54:55.106 INFO:teuthology.orchestra.run.vm11.stdout:Selecting previously unselected package python3-logutils.
2026-03-08T22:54:55.112 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../042-python3-logutils_0.3.3-8_all.deb ...
2026-03-08T22:54:55.113 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking python3-logutils (0.3.3-8) ...
2026-03-08T22:54:55.116 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package python3-logutils.
2026-03-08T22:54:55.123 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../042-python3-logutils_0.3.3-8_all.deb ...
2026-03-08T22:54:55.124 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking python3-logutils (0.3.3-8) ...
2026-03-08T22:54:55.134 INFO:teuthology.orchestra.run.vm11.stdout:Selecting previously unselected package python3-mako.
2026-03-08T22:54:55.140 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../043-python3-mako_1.1.3+ds1-2ubuntu0.1_all.deb ...
2026-03-08T22:54:55.141 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking python3-mako (1.1.3+ds1-2ubuntu0.1) ...
2026-03-08T22:54:55.149 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package python3-mako.
2026-03-08T22:54:55.151 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../043-python3-mako_1.1.3+ds1-2ubuntu0.1_all.deb ...
2026-03-08T22:54:55.152 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking python3-mako (1.1.3+ds1-2ubuntu0.1) ...
2026-03-08T22:54:55.162 INFO:teuthology.orchestra.run.vm11.stdout:Selecting previously unselected package python3-simplegeneric.
2026-03-08T22:54:55.169 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../044-python3-simplegeneric_0.8.1-3_all.deb ...
2026-03-08T22:54:55.170 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking python3-simplegeneric (0.8.1-3) ...
2026-03-08T22:54:55.175 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package python3-simplegeneric.
2026-03-08T22:54:55.181 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../044-python3-simplegeneric_0.8.1-3_all.deb ...
2026-03-08T22:54:55.182 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking python3-simplegeneric (0.8.1-3) ...
2026-03-08T22:54:55.188 INFO:teuthology.orchestra.run.vm11.stdout:Selecting previously unselected package python3-singledispatch.
2026-03-08T22:54:55.195 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../045-python3-singledispatch_3.4.0.3-3_all.deb ...
2026-03-08T22:54:55.196 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking python3-singledispatch (3.4.0.3-3) ...
2026-03-08T22:54:55.198 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package python3-singledispatch.
2026-03-08T22:54:55.204 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../045-python3-singledispatch_3.4.0.3-3_all.deb ...
2026-03-08T22:54:55.205 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking python3-singledispatch (3.4.0.3-3) ...
2026-03-08T22:54:55.213 INFO:teuthology.orchestra.run.vm11.stdout:Selecting previously unselected package python3-webob.
2026-03-08T22:54:55.219 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package python3-webob.
2026-03-08T22:54:55.220 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../046-python3-webob_1%3a1.8.6-1.1ubuntu0.1_all.deb ...
2026-03-08T22:54:55.221 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking python3-webob (1:1.8.6-1.1ubuntu0.1) ...
2026-03-08T22:54:55.225 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../046-python3-webob_1%3a1.8.6-1.1ubuntu0.1_all.deb ...
2026-03-08T22:54:55.226 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking python3-webob (1:1.8.6-1.1ubuntu0.1) ...
2026-03-08T22:54:55.244 INFO:teuthology.orchestra.run.vm11.stdout:Selecting previously unselected package python3-waitress.
2026-03-08T22:54:55.246 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package python3-waitress.
2026-03-08T22:54:55.250 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../047-python3-waitress_1.4.4-1.1ubuntu1.1_all.deb ...
2026-03-08T22:54:55.252 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../047-python3-waitress_1.4.4-1.1ubuntu1.1_all.deb ...
2026-03-08T22:54:55.253 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking python3-waitress (1.4.4-1.1ubuntu1.1) ...
2026-03-08T22:54:55.254 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking python3-waitress (1.4.4-1.1ubuntu1.1) ...
2026-03-08T22:54:55.271 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package python3-tempita.
2026-03-08T22:54:55.275 INFO:teuthology.orchestra.run.vm11.stdout:Selecting previously unselected package python3-tempita.
2026-03-08T22:54:55.277 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../048-python3-tempita_0.5.2-6ubuntu1_all.deb ...
2026-03-08T22:54:55.278 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking python3-tempita (0.5.2-6ubuntu1) ...
2026-03-08T22:54:55.281 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../048-python3-tempita_0.5.2-6ubuntu1_all.deb ...
2026-03-08T22:54:55.281 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking python3-tempita (0.5.2-6ubuntu1) ...
2026-03-08T22:54:55.294 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package python3-paste.
2026-03-08T22:54:55.296 INFO:teuthology.orchestra.run.vm11.stdout:Selecting previously unselected package python3-paste.
2026-03-08T22:54:55.300 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../049-python3-paste_3.5.0+dfsg1-1_all.deb ...
2026-03-08T22:54:55.301 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking python3-paste (3.5.0+dfsg1-1) ...
2026-03-08T22:54:55.302 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../049-python3-paste_3.5.0+dfsg1-1_all.deb ...
2026-03-08T22:54:55.302 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking python3-paste (3.5.0+dfsg1-1) ...
2026-03-08T22:54:55.339 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package python-pastedeploy-tpl.
2026-03-08T22:54:55.340 INFO:teuthology.orchestra.run.vm11.stdout:Selecting previously unselected package python-pastedeploy-tpl.
2026-03-08T22:54:55.346 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../050-python-pastedeploy-tpl_2.1.1-1_all.deb ...
2026-03-08T22:54:55.347 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking python-pastedeploy-tpl (2.1.1-1) ...
2026-03-08T22:54:55.347 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../050-python-pastedeploy-tpl_2.1.1-1_all.deb ...
2026-03-08T22:54:55.348 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking python-pastedeploy-tpl (2.1.1-1) ...
2026-03-08T22:54:55.364 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package python3-pastedeploy.
2026-03-08T22:54:55.364 INFO:teuthology.orchestra.run.vm11.stdout:Selecting previously unselected package python3-pastedeploy.
2026-03-08T22:54:55.370 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../051-python3-pastedeploy_2.1.1-1_all.deb ...
2026-03-08T22:54:55.371 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../051-python3-pastedeploy_2.1.1-1_all.deb ...
2026-03-08T22:54:55.371 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking python3-pastedeploy (2.1.1-1) ...
2026-03-08T22:54:55.371 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking python3-pastedeploy (2.1.1-1) ...
2026-03-08T22:54:55.390 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package python3-webtest.
2026-03-08T22:54:55.391 INFO:teuthology.orchestra.run.vm11.stdout:Selecting previously unselected package python3-webtest.
2026-03-08T22:54:55.396 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../052-python3-webtest_2.0.35-1_all.deb ...
2026-03-08T22:54:55.397 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking python3-webtest (2.0.35-1) ...
2026-03-08T22:54:55.397 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../052-python3-webtest_2.0.35-1_all.deb ...
2026-03-08T22:54:55.398 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking python3-webtest (2.0.35-1) ...
2026-03-08T22:54:55.416 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package python3-pecan.
2026-03-08T22:54:55.417 INFO:teuthology.orchestra.run.vm11.stdout:Selecting previously unselected package python3-pecan.
2026-03-08T22:54:55.422 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../053-python3-pecan_1.3.3-4ubuntu2_all.deb ...
2026-03-08T22:54:55.423 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../053-python3-pecan_1.3.3-4ubuntu2_all.deb ...
2026-03-08T22:54:55.424 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking python3-pecan (1.3.3-4ubuntu2) ...
2026-03-08T22:54:55.424 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking python3-pecan (1.3.3-4ubuntu2) ...
2026-03-08T22:54:55.459 INFO:teuthology.orchestra.run.vm11.stdout:Selecting previously unselected package python3-werkzeug.
2026-03-08T22:54:55.461 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package python3-werkzeug.
2026-03-08T22:54:55.465 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../054-python3-werkzeug_2.0.2+dfsg1-1ubuntu0.22.04.3_all.deb ...
2026-03-08T22:54:55.466 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking python3-werkzeug (2.0.2+dfsg1-1ubuntu0.22.04.3) ...
2026-03-08T22:54:55.468 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../054-python3-werkzeug_2.0.2+dfsg1-1ubuntu0.22.04.3_all.deb ...
2026-03-08T22:54:55.469 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking python3-werkzeug (2.0.2+dfsg1-1ubuntu0.22.04.3) ...
2026-03-08T22:54:55.489 INFO:teuthology.orchestra.run.vm11.stdout:Selecting previously unselected package ceph-mgr-modules-core.
2026-03-08T22:54:55.494 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../055-ceph-mgr-modules-core_19.2.3-678-ge911bdeb-1jammy_all.deb ...
2026-03-08T22:54:55.495 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking ceph-mgr-modules-core (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T22:54:55.496 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package ceph-mgr-modules-core.
2026-03-08T22:54:55.502 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../055-ceph-mgr-modules-core_19.2.3-678-ge911bdeb-1jammy_all.deb ...
2026-03-08T22:54:55.503 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking ceph-mgr-modules-core (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T22:54:55.534 INFO:teuthology.orchestra.run.vm11.stdout:Selecting previously unselected package libsqlite3-mod-ceph.
2026-03-08T22:54:55.539 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../056-libsqlite3-mod-ceph_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-08T22:54:55.540 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking libsqlite3-mod-ceph (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T22:54:55.540 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package libsqlite3-mod-ceph.
2026-03-08T22:54:55.546 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../056-libsqlite3-mod-ceph_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-08T22:54:55.547 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking libsqlite3-mod-ceph (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T22:54:55.553 INFO:teuthology.orchestra.run.vm11.stdout:Selecting previously unselected package ceph-mgr.
2026-03-08T22:54:55.557 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../057-ceph-mgr_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-08T22:54:55.558 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T22:54:55.563 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package ceph-mgr.
2026-03-08T22:54:55.568 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../057-ceph-mgr_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-08T22:54:55.570 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T22:54:55.590 INFO:teuthology.orchestra.run.vm11.stdout:Selecting previously unselected package ceph-mon.
2026-03-08T22:54:55.594 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../058-ceph-mon_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-08T22:54:55.595 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking ceph-mon (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T22:54:55.602 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package ceph-mon.
2026-03-08T22:54:55.608 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../058-ceph-mon_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-08T22:54:55.609 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking ceph-mon (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T22:54:55.738 INFO:teuthology.orchestra.run.vm11.stdout:Selecting previously unselected package libfuse2:amd64.
2026-03-08T22:54:55.738 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package libfuse2:amd64.
2026-03-08T22:54:55.743 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../059-libfuse2_2.9.9-5ubuntu3_amd64.deb ...
2026-03-08T22:54:55.743 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../059-libfuse2_2.9.9-5ubuntu3_amd64.deb ...
2026-03-08T22:54:55.744 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking libfuse2:amd64 (2.9.9-5ubuntu3) ...
2026-03-08T22:54:55.745 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking libfuse2:amd64 (2.9.9-5ubuntu3) ...
2026-03-08T22:54:55.763 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package ceph-osd.
2026-03-08T22:54:55.766 INFO:teuthology.orchestra.run.vm11.stdout:Selecting previously unselected package ceph-osd.
2026-03-08T22:54:55.768 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../060-ceph-osd_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-08T22:54:55.769 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking ceph-osd (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T22:54:55.772 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../060-ceph-osd_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-08T22:54:55.773 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking ceph-osd (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T22:54:56.244 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package ceph.
2026-03-08T22:54:56.248 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../061-ceph_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-08T22:54:56.248 INFO:teuthology.orchestra.run.vm11.stdout:Selecting previously unselected package ceph.
2026-03-08T22:54:56.249 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking ceph (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T22:54:56.254 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../061-ceph_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-08T22:54:56.256 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking ceph (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T22:54:56.298 INFO:teuthology.orchestra.run.vm11.stdout:Selecting previously unselected package ceph-fuse.
2026-03-08T22:54:56.300 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package ceph-fuse.
2026-03-08T22:54:56.303 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../062-ceph-fuse_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-08T22:54:56.306 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../062-ceph-fuse_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-08T22:54:56.307 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T22:54:56.307 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T22:54:56.390 INFO:teuthology.orchestra.run.vm11.stdout:Selecting previously unselected package ceph-mds.
2026-03-08T22:54:56.396 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../063-ceph-mds_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-08T22:54:56.399 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking ceph-mds (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T22:54:56.400 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package ceph-mds.
2026-03-08T22:54:56.407 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../063-ceph-mds_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-08T22:54:56.413 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking ceph-mds (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T22:54:56.509 INFO:teuthology.orchestra.run.vm11.stdout:Selecting previously unselected package cephadm.
2026-03-08T22:54:56.513 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package cephadm.
2026-03-08T22:54:56.515 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../064-cephadm_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-08T22:54:56.515 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../064-cephadm_19.2.3-678-ge911bdeb-1jammy_amd64.deb ...
2026-03-08T22:54:56.522 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking cephadm (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T22:54:56.522 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking cephadm (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T22:54:56.583 INFO:teuthology.orchestra.run.vm11.stdout:Selecting previously unselected package python3-asyncssh.
2026-03-08T22:54:56.584 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package python3-asyncssh.
2026-03-08T22:54:56.589 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../065-python3-asyncssh_2.5.0-1ubuntu0.1_all.deb ...
2026-03-08T22:54:56.589 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../065-python3-asyncssh_2.5.0-1ubuntu0.1_all.deb ...
2026-03-08T22:54:56.594 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking python3-asyncssh (2.5.0-1ubuntu0.1) ...
2026-03-08T22:54:56.594 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking python3-asyncssh (2.5.0-1ubuntu0.1) ...
2026-03-08T22:54:56.687 INFO:teuthology.orchestra.run.vm11.stdout:Selecting previously unselected package ceph-mgr-cephadm.
2026-03-08T22:54:56.688 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package ceph-mgr-cephadm.
2026-03-08T22:54:56.694 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../066-ceph-mgr-cephadm_19.2.3-678-ge911bdeb-1jammy_all.deb ...
2026-03-08T22:54:56.694 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../066-ceph-mgr-cephadm_19.2.3-678-ge911bdeb-1jammy_all.deb ...
2026-03-08T22:54:56.697 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking ceph-mgr-cephadm (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T22:54:56.697 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking ceph-mgr-cephadm (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T22:54:56.782 INFO:teuthology.orchestra.run.vm11.stdout:Selecting previously unselected package python3-repoze.lru.
2026-03-08T22:54:56.784 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package python3-repoze.lru.
2026-03-08T22:54:56.788 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../067-python3-repoze.lru_0.7-2_all.deb ...
2026-03-08T22:54:56.788 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../067-python3-repoze.lru_0.7-2_all.deb ...
2026-03-08T22:54:56.795 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking python3-repoze.lru (0.7-2) ...
2026-03-08T22:54:56.795 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking python3-repoze.lru (0.7-2) ...
2026-03-08T22:54:56.876 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package python3-routes.
2026-03-08T22:54:56.876 INFO:teuthology.orchestra.run.vm11.stdout:Selecting previously unselected package python3-routes.
2026-03-08T22:54:56.882 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../068-python3-routes_2.5.1-1ubuntu1_all.deb ...
2026-03-08T22:54:56.882 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../068-python3-routes_2.5.1-1ubuntu1_all.deb ...
2026-03-08T22:54:56.889 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking python3-routes (2.5.1-1ubuntu1) ...
2026-03-08T22:54:56.890 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking python3-routes (2.5.1-1ubuntu1) ...
2026-03-08T22:54:56.958 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package ceph-mgr-dashboard.
2026-03-08T22:54:56.958 INFO:teuthology.orchestra.run.vm11.stdout:Selecting previously unselected package ceph-mgr-dashboard.
2026-03-08T22:54:56.964 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../069-ceph-mgr-dashboard_19.2.3-678-ge911bdeb-1jammy_all.deb ...
2026-03-08T22:54:56.964 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../069-ceph-mgr-dashboard_19.2.3-678-ge911bdeb-1jammy_all.deb ...
2026-03-08T22:54:56.971 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking ceph-mgr-dashboard (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T22:54:56.971 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking ceph-mgr-dashboard (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T22:54:57.528 INFO:teuthology.orchestra.run.vm11.stdout:Selecting previously unselected package python3-sklearn-lib:amd64.
2026-03-08T22:54:57.532 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package python3-sklearn-lib:amd64.
2026-03-08T22:54:57.535 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../070-python3-sklearn-lib_0.23.2-5ubuntu6_amd64.deb ...
2026-03-08T22:54:57.535 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../070-python3-sklearn-lib_0.23.2-5ubuntu6_amd64.deb ...
2026-03-08T22:54:57.537 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking python3-sklearn-lib:amd64 (0.23.2-5ubuntu6) ...
2026-03-08T22:54:57.537 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking python3-sklearn-lib:amd64 (0.23.2-5ubuntu6) ...
2026-03-08T22:54:57.645 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package python3-joblib.
2026-03-08T22:54:57.645 INFO:teuthology.orchestra.run.vm11.stdout:Selecting previously unselected package python3-joblib.
2026-03-08T22:54:57.651 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../071-python3-joblib_0.17.0-4ubuntu1_all.deb ...
2026-03-08T22:54:57.652 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../071-python3-joblib_0.17.0-4ubuntu1_all.deb ...
2026-03-08T22:54:57.653 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking python3-joblib (0.17.0-4ubuntu1) ...
2026-03-08T22:54:57.654 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking python3-joblib (0.17.0-4ubuntu1) ...
2026-03-08T22:54:57.721 INFO:teuthology.orchestra.run.vm11.stdout:Selecting previously unselected package python3-threadpoolctl.
2026-03-08T22:54:57.721 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package python3-threadpoolctl.
2026-03-08T22:54:57.727 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../072-python3-threadpoolctl_3.1.0-1_all.deb ...
2026-03-08T22:54:57.728 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../072-python3-threadpoolctl_3.1.0-1_all.deb ...
2026-03-08T22:54:57.729 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking python3-threadpoolctl (3.1.0-1) ...
2026-03-08T22:54:57.729 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking python3-threadpoolctl (3.1.0-1) ...
2026-03-08T22:54:57.766 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package python3-sklearn.
2026-03-08T22:54:57.772 INFO:teuthology.orchestra.run.vm11.stdout:Selecting previously unselected package python3-sklearn.
2026-03-08T22:54:57.772 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../073-python3-sklearn_0.23.2-5ubuntu6_all.deb ...
2026-03-08T22:54:57.774 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking python3-sklearn (0.23.2-5ubuntu6) ...
2026-03-08T22:54:57.778 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../073-python3-sklearn_0.23.2-5ubuntu6_all.deb ...
2026-03-08T22:54:57.784 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking python3-sklearn (0.23.2-5ubuntu6) ...
2026-03-08T22:54:57.932 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package ceph-mgr-diskprediction-local.
2026-03-08T22:54:57.936 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../074-ceph-mgr-diskprediction-local_19.2.3-678-ge911bdeb-1jammy_all.deb ...
2026-03-08T22:54:57.938 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking ceph-mgr-diskprediction-local (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T22:54:57.954 INFO:teuthology.orchestra.run.vm11.stdout:Selecting previously unselected package ceph-mgr-diskprediction-local.
2026-03-08T22:54:57.960 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../074-ceph-mgr-diskprediction-local_19.2.3-678-ge911bdeb-1jammy_all.deb ...
2026-03-08T22:54:57.964 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking ceph-mgr-diskprediction-local (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T22:54:58.341 INFO:teuthology.orchestra.run.vm11.stdout:Selecting previously unselected package python3-cachetools.
2026-03-08T22:54:58.346 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../075-python3-cachetools_5.0.0-1_all.deb ...
2026-03-08T22:54:58.348 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking python3-cachetools (5.0.0-1) ...
2026-03-08T22:54:58.348 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package python3-cachetools.
2026-03-08T22:54:58.352 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../075-python3-cachetools_5.0.0-1_all.deb ... 2026-03-08T22:54:58.353 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking python3-cachetools (5.0.0-1) ... 2026-03-08T22:54:58.369 INFO:teuthology.orchestra.run.vm11.stdout:Selecting previously unselected package python3-rsa. 2026-03-08T22:54:58.374 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../076-python3-rsa_4.8-1_all.deb ... 2026-03-08T22:54:58.374 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package python3-rsa. 2026-03-08T22:54:58.376 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking python3-rsa (4.8-1) ... 2026-03-08T22:54:58.381 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../076-python3-rsa_4.8-1_all.deb ... 2026-03-08T22:54:58.382 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking python3-rsa (4.8-1) ... 2026-03-08T22:54:58.403 INFO:teuthology.orchestra.run.vm11.stdout:Selecting previously unselected package python3-google-auth. 2026-03-08T22:54:58.406 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package python3-google-auth. 2026-03-08T22:54:58.411 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../077-python3-google-auth_1.5.1-3_all.deb ... 2026-03-08T22:54:58.411 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking python3-google-auth (1.5.1-3) ... 2026-03-08T22:54:58.413 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../077-python3-google-auth_1.5.1-3_all.deb ... 2026-03-08T22:54:58.414 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking python3-google-auth (1.5.1-3) ... 2026-03-08T22:54:58.436 INFO:teuthology.orchestra.run.vm11.stdout:Selecting previously unselected package python3-requests-oauthlib. 2026-03-08T22:54:58.438 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package python3-requests-oauthlib. 
2026-03-08T22:54:58.443 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../078-python3-requests-oauthlib_1.3.0+ds-0.1_all.deb ... 2026-03-08T22:54:58.444 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../078-python3-requests-oauthlib_1.3.0+ds-0.1_all.deb ... 2026-03-08T22:54:58.444 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking python3-requests-oauthlib (1.3.0+ds-0.1) ... 2026-03-08T22:54:58.445 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking python3-requests-oauthlib (1.3.0+ds-0.1) ... 2026-03-08T22:54:58.462 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package python3-websocket. 2026-03-08T22:54:58.465 INFO:teuthology.orchestra.run.vm11.stdout:Selecting previously unselected package python3-websocket. 2026-03-08T22:54:58.468 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../079-python3-websocket_1.2.3-1_all.deb ... 2026-03-08T22:54:58.469 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking python3-websocket (1.2.3-1) ... 2026-03-08T22:54:58.472 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../079-python3-websocket_1.2.3-1_all.deb ... 2026-03-08T22:54:58.473 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking python3-websocket (1.2.3-1) ... 2026-03-08T22:54:58.490 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package python3-kubernetes. 2026-03-08T22:54:58.495 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../080-python3-kubernetes_12.0.1-1ubuntu1_all.deb ... 2026-03-08T22:54:58.498 INFO:teuthology.orchestra.run.vm11.stdout:Selecting previously unselected package python3-kubernetes. 2026-03-08T22:54:58.504 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../080-python3-kubernetes_12.0.1-1ubuntu1_all.deb ... 2026-03-08T22:54:58.513 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking python3-kubernetes (12.0.1-1ubuntu1) ... 
2026-03-08T22:54:58.523 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking python3-kubernetes (12.0.1-1ubuntu1) ... 2026-03-08T22:54:58.715 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package ceph-mgr-k8sevents. 2026-03-08T22:54:58.718 INFO:teuthology.orchestra.run.vm11.stdout:Selecting previously unselected package ceph-mgr-k8sevents. 2026-03-08T22:54:58.722 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../081-ceph-mgr-k8sevents_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-08T22:54:58.723 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking ceph-mgr-k8sevents (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T22:54:58.724 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../081-ceph-mgr-k8sevents_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-08T22:54:58.725 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking ceph-mgr-k8sevents (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T22:54:58.740 INFO:teuthology.orchestra.run.vm11.stdout:Selecting previously unselected package libonig5:amd64. 2026-03-08T22:54:58.742 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package libonig5:amd64. 2026-03-08T22:54:58.745 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../082-libonig5_6.9.7.1-2build1_amd64.deb ... 2026-03-08T22:54:58.746 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking libonig5:amd64 (6.9.7.1-2build1) ... 2026-03-08T22:54:58.749 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../082-libonig5_6.9.7.1-2build1_amd64.deb ... 2026-03-08T22:54:58.749 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking libonig5:amd64 (6.9.7.1-2build1) ... 2026-03-08T22:54:58.769 INFO:teuthology.orchestra.run.vm11.stdout:Selecting previously unselected package libjq1:amd64. 2026-03-08T22:54:58.772 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package libjq1:amd64. 
2026-03-08T22:54:58.772 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../083-libjq1_1.6-2.1ubuntu3.1_amd64.deb ... 2026-03-08T22:54:58.773 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking libjq1:amd64 (1.6-2.1ubuntu3.1) ... 2026-03-08T22:54:58.778 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../083-libjq1_1.6-2.1ubuntu3.1_amd64.deb ... 2026-03-08T22:54:58.779 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking libjq1:amd64 (1.6-2.1ubuntu3.1) ... 2026-03-08T22:54:58.789 INFO:teuthology.orchestra.run.vm11.stdout:Selecting previously unselected package jq. 2026-03-08T22:54:58.793 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../084-jq_1.6-2.1ubuntu3.1_amd64.deb ... 2026-03-08T22:54:58.794 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking jq (1.6-2.1ubuntu3.1) ... 2026-03-08T22:54:58.801 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package jq. 2026-03-08T22:54:58.807 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../084-jq_1.6-2.1ubuntu3.1_amd64.deb ... 2026-03-08T22:54:58.808 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking jq (1.6-2.1ubuntu3.1) ... 2026-03-08T22:54:58.813 INFO:teuthology.orchestra.run.vm11.stdout:Selecting previously unselected package socat. 2026-03-08T22:54:58.818 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../085-socat_1.7.4.1-3ubuntu4_amd64.deb ... 2026-03-08T22:54:58.819 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking socat (1.7.4.1-3ubuntu4) ... 2026-03-08T22:54:58.823 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package socat. 2026-03-08T22:54:58.828 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../085-socat_1.7.4.1-3ubuntu4_amd64.deb ... 2026-03-08T22:54:58.831 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking socat (1.7.4.1-3ubuntu4) ... 2026-03-08T22:54:58.847 INFO:teuthology.orchestra.run.vm11.stdout:Selecting previously unselected package xmlstarlet. 
2026-03-08T22:54:58.853 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../086-xmlstarlet_1.6.1-2.1_amd64.deb ... 2026-03-08T22:54:58.854 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking xmlstarlet (1.6.1-2.1) ... 2026-03-08T22:54:58.862 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package xmlstarlet. 2026-03-08T22:54:58.868 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../086-xmlstarlet_1.6.1-2.1_amd64.deb ... 2026-03-08T22:54:58.869 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking xmlstarlet (1.6.1-2.1) ... 2026-03-08T22:54:58.910 INFO:teuthology.orchestra.run.vm11.stdout:Selecting previously unselected package ceph-test. 2026-03-08T22:54:58.915 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../087-ceph-test_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-08T22:54:58.916 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package ceph-test. 2026-03-08T22:54:58.916 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking ceph-test (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T22:54:58.922 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../087-ceph-test_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-08T22:54:58.923 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking ceph-test (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T22:55:00.358 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package ceph-volume. 2026-03-08T22:55:00.364 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../088-ceph-volume_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-08T22:55:00.365 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking ceph-volume (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T22:55:00.368 INFO:teuthology.orchestra.run.vm11.stdout:Selecting previously unselected package ceph-volume. 2026-03-08T22:55:00.374 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../088-ceph-volume_19.2.3-678-ge911bdeb-1jammy_all.deb ... 
2026-03-08T22:55:00.376 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking ceph-volume (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T22:55:00.407 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package libcephfs-dev. 2026-03-08T22:55:00.413 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../089-libcephfs-dev_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-08T22:55:00.417 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking libcephfs-dev (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T22:55:00.420 INFO:teuthology.orchestra.run.vm11.stdout:Selecting previously unselected package libcephfs-dev. 2026-03-08T22:55:00.425 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../089-libcephfs-dev_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-08T22:55:00.426 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking libcephfs-dev (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T22:55:00.453 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package lua-socket:amd64. 2026-03-08T22:55:00.459 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../090-lua-socket_3.0~rc1+git+ac3201d-6_amd64.deb ... 2026-03-08T22:55:00.460 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking lua-socket:amd64 (3.0~rc1+git+ac3201d-6) ... 2026-03-08T22:55:00.467 INFO:teuthology.orchestra.run.vm11.stdout:Selecting previously unselected package lua-socket:amd64. 2026-03-08T22:55:00.472 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../090-lua-socket_3.0~rc1+git+ac3201d-6_amd64.deb ... 2026-03-08T22:55:00.477 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking lua-socket:amd64 (3.0~rc1+git+ac3201d-6) ... 2026-03-08T22:55:00.521 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package lua-sec:amd64. 2026-03-08T22:55:00.527 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../091-lua-sec_1.0.2-1_amd64.deb ... 
2026-03-08T22:55:00.531 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking lua-sec:amd64 (1.0.2-1) ... 2026-03-08T22:55:00.535 INFO:teuthology.orchestra.run.vm11.stdout:Selecting previously unselected package lua-sec:amd64. 2026-03-08T22:55:00.542 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../091-lua-sec_1.0.2-1_amd64.deb ... 2026-03-08T22:55:00.551 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking lua-sec:amd64 (1.0.2-1) ... 2026-03-08T22:55:00.630 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package nvme-cli. 2026-03-08T22:55:00.636 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../092-nvme-cli_1.16-3ubuntu0.3_amd64.deb ... 2026-03-08T22:55:00.646 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking nvme-cli (1.16-3ubuntu0.3) ... 2026-03-08T22:55:00.647 INFO:teuthology.orchestra.run.vm11.stdout:Selecting previously unselected package nvme-cli. 2026-03-08T22:55:00.654 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../092-nvme-cli_1.16-3ubuntu0.3_amd64.deb ... 2026-03-08T22:55:00.664 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking nvme-cli (1.16-3ubuntu0.3) ... 2026-03-08T22:55:00.758 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package pkg-config. 2026-03-08T22:55:00.764 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../093-pkg-config_0.29.2-1ubuntu3_amd64.deb ... 2026-03-08T22:55:00.767 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking pkg-config (0.29.2-1ubuntu3) ... 2026-03-08T22:55:00.782 INFO:teuthology.orchestra.run.vm11.stdout:Selecting previously unselected package pkg-config. 2026-03-08T22:55:00.788 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../093-pkg-config_0.29.2-1ubuntu3_amd64.deb ... 2026-03-08T22:55:00.789 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking pkg-config (0.29.2-1ubuntu3) ... 
2026-03-08T22:55:00.836 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package python-asyncssh-doc. 2026-03-08T22:55:00.841 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../094-python-asyncssh-doc_2.5.0-1ubuntu0.1_all.deb ... 2026-03-08T22:55:00.849 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking python-asyncssh-doc (2.5.0-1ubuntu0.1) ... 2026-03-08T22:55:00.850 INFO:teuthology.orchestra.run.vm11.stdout:Selecting previously unselected package python-asyncssh-doc. 2026-03-08T22:55:00.856 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../094-python-asyncssh-doc_2.5.0-1ubuntu0.1_all.deb ... 2026-03-08T22:55:00.863 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking python-asyncssh-doc (2.5.0-1ubuntu0.1) ... 2026-03-08T22:55:00.970 INFO:teuthology.orchestra.run.vm11.stdout:Selecting previously unselected package python3-iniconfig. 2026-03-08T22:55:00.972 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package python3-iniconfig. 2026-03-08T22:55:00.974 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../095-python3-iniconfig_1.1.1-2_all.deb ... 2026-03-08T22:55:00.975 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../095-python3-iniconfig_1.1.1-2_all.deb ... 2026-03-08T22:55:00.982 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking python3-iniconfig (1.1.1-2) ... 2026-03-08T22:55:00.984 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking python3-iniconfig (1.1.1-2) ... 2026-03-08T22:55:01.071 INFO:teuthology.orchestra.run.vm11.stdout:Selecting previously unselected package python3-pastescript. 2026-03-08T22:55:01.077 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../096-python3-pastescript_2.0.2-4_all.deb ... 2026-03-08T22:55:01.085 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking python3-pastescript (2.0.2-4) ... 
2026-03-08T22:55:01.088 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package python3-pastescript. 2026-03-08T22:55:01.092 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../096-python3-pastescript_2.0.2-4_all.deb ... 2026-03-08T22:55:01.101 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking python3-pastescript (2.0.2-4) ... 2026-03-08T22:55:01.177 INFO:teuthology.orchestra.run.vm11.stdout:Selecting previously unselected package python3-pluggy. 2026-03-08T22:55:01.182 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../097-python3-pluggy_0.13.0-7.1_all.deb ... 2026-03-08T22:55:01.188 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking python3-pluggy (0.13.0-7.1) ... 2026-03-08T22:55:01.204 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package python3-pluggy. 2026-03-08T22:55:01.211 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../097-python3-pluggy_0.13.0-7.1_all.deb ... 2026-03-08T22:55:01.217 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking python3-pluggy (0.13.0-7.1) ... 2026-03-08T22:55:01.285 INFO:teuthology.orchestra.run.vm11.stdout:Selecting previously unselected package python3-psutil. 2026-03-08T22:55:01.292 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../098-python3-psutil_5.9.0-1build1_amd64.deb ... 2026-03-08T22:55:01.304 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking python3-psutil (5.9.0-1build1) ... 2026-03-08T22:55:01.304 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package python3-psutil. 2026-03-08T22:55:01.310 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../098-python3-psutil_5.9.0-1build1_amd64.deb ... 2026-03-08T22:55:01.320 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking python3-psutil (5.9.0-1build1) ... 2026-03-08T22:55:01.390 INFO:teuthology.orchestra.run.vm11.stdout:Selecting previously unselected package python3-py. 
2026-03-08T22:55:01.398 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../099-python3-py_1.10.0-1_all.deb ... 2026-03-08T22:55:01.404 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking python3-py (1.10.0-1) ... 2026-03-08T22:55:01.405 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package python3-py. 2026-03-08T22:55:01.411 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../099-python3-py_1.10.0-1_all.deb ... 2026-03-08T22:55:01.427 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking python3-py (1.10.0-1) ... 2026-03-08T22:55:01.500 INFO:teuthology.orchestra.run.vm11.stdout:Selecting previously unselected package python3-pygments. 2026-03-08T22:55:01.506 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../100-python3-pygments_2.11.2+dfsg-2ubuntu0.1_all.deb ... 2026-03-08T22:55:01.514 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking python3-pygments (2.11.2+dfsg-2ubuntu0.1) ... 2026-03-08T22:55:01.515 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package python3-pygments. 2026-03-08T22:55:01.521 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../100-python3-pygments_2.11.2+dfsg-2ubuntu0.1_all.deb ... 2026-03-08T22:55:01.529 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking python3-pygments (2.11.2+dfsg-2ubuntu0.1) ... 2026-03-08T22:55:01.639 INFO:teuthology.orchestra.run.vm11.stdout:Selecting previously unselected package python3-pyinotify. 2026-03-08T22:55:01.645 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../101-python3-pyinotify_0.9.6-1.3_all.deb ... 2026-03-08T22:55:01.652 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking python3-pyinotify (0.9.6-1.3) ... 2026-03-08T22:55:01.666 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package python3-pyinotify. 2026-03-08T22:55:01.672 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../101-python3-pyinotify_0.9.6-1.3_all.deb ... 
2026-03-08T22:55:01.681 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking python3-pyinotify (0.9.6-1.3) ... 2026-03-08T22:55:01.753 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package python3-toml. 2026-03-08T22:55:01.753 INFO:teuthology.orchestra.run.vm11.stdout:Selecting previously unselected package python3-toml. 2026-03-08T22:55:01.759 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../102-python3-toml_0.10.2-1_all.deb ... 2026-03-08T22:55:01.761 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../102-python3-toml_0.10.2-1_all.deb ... 2026-03-08T22:55:01.767 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking python3-toml (0.10.2-1) ... 2026-03-08T22:55:01.768 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking python3-toml (0.10.2-1) ... 2026-03-08T22:55:01.847 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package python3-pytest. 2026-03-08T22:55:01.853 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../103-python3-pytest_6.2.5-1ubuntu2_all.deb ... 2026-03-08T22:55:01.855 INFO:teuthology.orchestra.run.vm11.stdout:Selecting previously unselected package python3-pytest. 2026-03-08T22:55:01.861 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../103-python3-pytest_6.2.5-1ubuntu2_all.deb ... 2026-03-08T22:55:01.861 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking python3-pytest (6.2.5-1ubuntu2) ... 2026-03-08T22:55:01.862 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking python3-pytest (6.2.5-1ubuntu2) ... 2026-03-08T22:55:01.946 INFO:teuthology.orchestra.run.vm11.stdout:Selecting previously unselected package python3-simplejson. 2026-03-08T22:55:01.952 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../104-python3-simplejson_3.17.6-1build1_amd64.deb ... 2026-03-08T22:55:01.960 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking python3-simplejson (3.17.6-1build1) ... 
2026-03-08T22:55:01.960 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package python3-simplejson. 2026-03-08T22:55:01.966 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../104-python3-simplejson_3.17.6-1build1_amd64.deb ... 2026-03-08T22:55:01.975 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking python3-simplejson (3.17.6-1build1) ... 2026-03-08T22:55:02.038 INFO:teuthology.orchestra.run.vm11.stdout:Selecting previously unselected package qttranslations5-l10n. 2026-03-08T22:55:02.045 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../105-qttranslations5-l10n_5.15.3-1_all.deb ... 2026-03-08T22:55:02.048 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package qttranslations5-l10n. 2026-03-08T22:55:02.054 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../105-qttranslations5-l10n_5.15.3-1_all.deb ... 2026-03-08T22:55:02.057 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking qttranslations5-l10n (5.15.3-1) ... 2026-03-08T22:55:02.058 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking qttranslations5-l10n (5.15.3-1) ... 2026-03-08T22:55:02.266 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package radosgw. 2026-03-08T22:55:02.272 INFO:teuthology.orchestra.run.vm11.stdout:Selecting previously unselected package radosgw. 2026-03-08T22:55:02.273 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../106-radosgw_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-08T22:55:02.278 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking radosgw (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T22:55:02.279 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../106-radosgw_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-08T22:55:02.287 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking radosgw (19.2.3-678-ge911bdeb-1jammy) ... 
2026-03-08T22:55:02.671 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package rbd-fuse. 2026-03-08T22:55:02.677 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../107-rbd-fuse_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-08T22:55:02.678 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking rbd-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T22:55:02.679 INFO:teuthology.orchestra.run.vm11.stdout:Selecting previously unselected package rbd-fuse. 2026-03-08T22:55:02.686 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../107-rbd-fuse_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-08T22:55:02.687 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking rbd-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T22:55:02.712 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package smartmontools. 2026-03-08T22:55:02.719 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../108-smartmontools_7.2-1ubuntu0.1_amd64.deb ... 2026-03-08T22:55:02.728 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking smartmontools (7.2-1ubuntu0.1) ... 2026-03-08T22:55:02.729 INFO:teuthology.orchestra.run.vm11.stdout:Selecting previously unselected package smartmontools. 2026-03-08T22:55:02.736 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../108-smartmontools_7.2-1ubuntu0.1_amd64.deb ... 2026-03-08T22:55:02.743 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking smartmontools (7.2-1ubuntu0.1) ... 2026-03-08T22:55:02.807 INFO:teuthology.orchestra.run.vm06.stdout:Setting up smartmontools (7.2-1ubuntu0.1) ... 2026-03-08T22:55:02.825 INFO:teuthology.orchestra.run.vm11.stdout:Setting up smartmontools (7.2-1ubuntu0.1) ... 2026-03-08T22:55:03.055 INFO:teuthology.orchestra.run.vm06.stdout:Created symlink /etc/systemd/system/smartd.service → /lib/systemd/system/smartmontools.service. 
2026-03-08T22:55:03.055 INFO:teuthology.orchestra.run.vm06.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/smartmontools.service → /lib/systemd/system/smartmontools.service. 2026-03-08T22:55:03.083 INFO:teuthology.orchestra.run.vm11.stdout:Created symlink /etc/systemd/system/smartd.service → /lib/systemd/system/smartmontools.service. 2026-03-08T22:55:03.083 INFO:teuthology.orchestra.run.vm11.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/smartmontools.service → /lib/systemd/system/smartmontools.service. 2026-03-08T22:55:03.455 INFO:teuthology.orchestra.run.vm06.stdout:Setting up python3-iniconfig (1.1.1-2) ... 2026-03-08T22:55:03.480 INFO:teuthology.orchestra.run.vm11.stdout:Setting up python3-iniconfig (1.1.1-2) ... 2026-03-08T22:55:03.541 INFO:teuthology.orchestra.run.vm06.stdout:Setting up libdouble-conversion3:amd64 (3.1.7-4) ... 2026-03-08T22:55:03.566 INFO:teuthology.orchestra.run.vm11.stdout:Setting up libdouble-conversion3:amd64 (3.1.7-4) ... 2026-03-08T22:55:03.566 INFO:teuthology.orchestra.run.vm06.stdout:Setting up nvme-cli (1.16-3ubuntu0.3) ... 2026-03-08T22:55:03.605 INFO:teuthology.orchestra.run.vm11.stdout:Setting up nvme-cli (1.16-3ubuntu0.3) ... 2026-03-08T22:55:03.661 INFO:teuthology.orchestra.run.vm06.stdout:Created symlink /etc/systemd/system/default.target.wants/nvmefc-boot-connections.service → /lib/systemd/system/nvmefc-boot-connections.service. 2026-03-08T22:55:03.695 INFO:teuthology.orchestra.run.vm11.stdout:Created symlink /etc/systemd/system/default.target.wants/nvmefc-boot-connections.service → /lib/systemd/system/nvmefc-boot-connections.service. 2026-03-08T22:55:03.935 INFO:teuthology.orchestra.run.vm06.stdout:Created symlink /etc/systemd/system/default.target.wants/nvmf-autoconnect.service → /lib/systemd/system/nvmf-autoconnect.service. 
2026-03-08T22:55:03.949 INFO:teuthology.orchestra.run.vm11.stdout:Created symlink /etc/systemd/system/default.target.wants/nvmf-autoconnect.service → /lib/systemd/system/nvmf-autoconnect.service. 2026-03-08T22:55:04.301 INFO:teuthology.orchestra.run.vm06.stdout:nvmf-connect.target is a disabled or a static unit, not starting it. 2026-03-08T22:55:04.308 INFO:teuthology.orchestra.run.vm06.stdout:Could not execute systemctl: at /usr/bin/deb-systemd-invoke line 142. 2026-03-08T22:55:04.312 INFO:teuthology.orchestra.run.vm06.stdout:Setting up cephadm (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T22:55:04.350 INFO:teuthology.orchestra.run.vm11.stdout:nvmf-connect.target is a disabled or a static unit, not starting it. 2026-03-08T22:55:04.357 INFO:teuthology.orchestra.run.vm11.stdout:Could not execute systemctl: at /usr/bin/deb-systemd-invoke line 142. 2026-03-08T22:55:04.360 INFO:teuthology.orchestra.run.vm11.stdout:Setting up cephadm (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T22:55:04.379 INFO:teuthology.orchestra.run.vm06.stdout:Adding system user cephadm....done 2026-03-08T22:55:04.402 INFO:teuthology.orchestra.run.vm06.stdout:Setting up python3-waitress (1.4.4-1.1ubuntu1.1) ... 2026-03-08T22:55:04.437 INFO:teuthology.orchestra.run.vm11.stdout:Adding system user cephadm....done 2026-03-08T22:55:04.459 INFO:teuthology.orchestra.run.vm11.stdout:Setting up python3-waitress (1.4.4-1.1ubuntu1.1) ... 2026-03-08T22:55:04.490 INFO:teuthology.orchestra.run.vm06.stdout:Setting up python3-jaraco.classes (3.2.1-3) ... 2026-03-08T22:55:04.542 INFO:teuthology.orchestra.run.vm11.stdout:Setting up python3-jaraco.classes (3.2.1-3) ... 2026-03-08T22:55:04.566 INFO:teuthology.orchestra.run.vm06.stdout:Setting up python-asyncssh-doc (2.5.0-1ubuntu0.1) ... 2026-03-08T22:55:04.574 INFO:teuthology.orchestra.run.vm06.stdout:Setting up python3-jaraco.functools (3.4.0-2) ... 2026-03-08T22:55:04.617 INFO:teuthology.orchestra.run.vm11.stdout:Setting up python-asyncssh-doc (2.5.0-1ubuntu0.1) ... 
2026-03-08T22:55:04.619 INFO:teuthology.orchestra.run.vm11.stdout:Setting up python3-jaraco.functools (3.4.0-2) ... 2026-03-08T22:55:04.652 INFO:teuthology.orchestra.run.vm06.stdout:Setting up python3-repoze.lru (0.7-2) ... 2026-03-08T22:55:04.692 INFO:teuthology.orchestra.run.vm11.stdout:Setting up python3-repoze.lru (0.7-2) ... 2026-03-08T22:55:04.730 INFO:teuthology.orchestra.run.vm06.stdout:Setting up liboath0:amd64 (2.6.7-3ubuntu0.1) ... 2026-03-08T22:55:04.738 INFO:teuthology.orchestra.run.vm06.stdout:Setting up python3-py (1.10.0-1) ... 2026-03-08T22:55:04.772 INFO:teuthology.orchestra.run.vm11.stdout:Setting up liboath0:amd64 (2.6.7-3ubuntu0.1) ... 2026-03-08T22:55:04.787 INFO:teuthology.orchestra.run.vm11.stdout:Setting up python3-py (1.10.0-1) ... 2026-03-08T22:55:04.842 INFO:teuthology.orchestra.run.vm06.stdout:Setting up python3-joblib (0.17.0-4ubuntu1) ... 2026-03-08T22:55:04.896 INFO:teuthology.orchestra.run.vm11.stdout:Setting up python3-joblib (0.17.0-4ubuntu1) ... 2026-03-08T22:55:04.976 INFO:teuthology.orchestra.run.vm06.stdout:Setting up python3-cachetools (5.0.0-1) ... 2026-03-08T22:55:05.036 INFO:teuthology.orchestra.run.vm11.stdout:Setting up python3-cachetools (5.0.0-1) ... 2026-03-08T22:55:05.052 INFO:teuthology.orchestra.run.vm06.stdout:Setting up unzip (6.0-26ubuntu3.2) ... 2026-03-08T22:55:05.071 INFO:teuthology.orchestra.run.vm06.stdout:Setting up python3-pyinotify (0.9.6-1.3) ... 2026-03-08T22:55:05.121 INFO:teuthology.orchestra.run.vm11.stdout:Setting up unzip (6.0-26ubuntu3.2) ... 2026-03-08T22:55:05.137 INFO:teuthology.orchestra.run.vm11.stdout:Setting up python3-pyinotify (0.9.6-1.3) ... 2026-03-08T22:55:05.161 INFO:teuthology.orchestra.run.vm06.stdout:Setting up python3-threadpoolctl (3.1.0-1) ... 2026-03-08T22:55:05.220 INFO:teuthology.orchestra.run.vm11.stdout:Setting up python3-threadpoolctl (3.1.0-1) ... 
2026-03-08T22:55:05.237 INFO:teuthology.orchestra.run.vm06.stdout:Setting up python3-ceph-argparse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T22:55:05.300 INFO:teuthology.orchestra.run.vm11.stdout:Setting up python3-ceph-argparse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T22:55:05.332 INFO:teuthology.orchestra.run.vm06.stdout:Setting up python3-sklearn-lib:amd64 (0.23.2-5ubuntu6) ... 2026-03-08T22:55:05.335 INFO:teuthology.orchestra.run.vm06.stdout:Setting up libnbd0 (1.10.5-1) ... 2026-03-08T22:55:05.351 INFO:teuthology.orchestra.run.vm06.stdout:Setting up lua-socket:amd64 (3.0~rc1+git+ac3201d-6) ... 2026-03-08T22:55:05.366 INFO:teuthology.orchestra.run.vm06.stdout:Setting up libreadline-dev:amd64 (8.1.2-1) ... 2026-03-08T22:55:05.389 INFO:teuthology.orchestra.run.vm06.stdout:Setting up libfuse2:amd64 (2.9.9-5ubuntu3) ... 2026-03-08T22:55:05.390 INFO:teuthology.orchestra.run.vm11.stdout:Setting up python3-sklearn-lib:amd64 (0.23.2-5ubuntu6) ... 2026-03-08T22:55:05.399 INFO:teuthology.orchestra.run.vm06.stdout:Setting up lua5.1 (5.1.5-8.1build4) ... 2026-03-08T22:55:05.405 INFO:teuthology.orchestra.run.vm11.stdout:Setting up libnbd0 (1.10.5-1) ... 2026-03-08T22:55:05.409 INFO:teuthology.orchestra.run.vm06.stdout:update-alternatives: using /usr/bin/lua5.1 to provide /usr/bin/lua (lua-interpreter) in auto mode 2026-03-08T22:55:05.415 INFO:teuthology.orchestra.run.vm06.stdout:update-alternatives: using /usr/bin/luac5.1 to provide /usr/bin/luac (lua-compiler) in auto mode 2026-03-08T22:55:05.415 INFO:teuthology.orchestra.run.vm11.stdout:Setting up lua-socket:amd64 (3.0~rc1+git+ac3201d-6) ... 2026-03-08T22:55:05.422 INFO:teuthology.orchestra.run.vm06.stdout:Setting up libpcre2-16-0:amd64 (10.39-3ubuntu0.1) ... 2026-03-08T22:55:05.429 INFO:teuthology.orchestra.run.vm11.stdout:Setting up libreadline-dev:amd64 (8.1.2-1) ... 2026-03-08T22:55:05.437 INFO:teuthology.orchestra.run.vm06.stdout:Setting up python3-psutil (5.9.0-1build1) ... 
2026-03-08T22:55:05.447 INFO:teuthology.orchestra.run.vm11.stdout:Setting up libfuse2:amd64 (2.9.9-5ubuntu3) ...
2026-03-08T22:55:05.457 INFO:teuthology.orchestra.run.vm11.stdout:Setting up lua5.1 (5.1.5-8.1build4) ...
2026-03-08T22:55:05.468 INFO:teuthology.orchestra.run.vm11.stdout:update-alternatives: using /usr/bin/lua5.1 to provide /usr/bin/lua (lua-interpreter) in auto mode
2026-03-08T22:55:05.474 INFO:teuthology.orchestra.run.vm11.stdout:update-alternatives: using /usr/bin/luac5.1 to provide /usr/bin/luac (lua-compiler) in auto mode
2026-03-08T22:55:05.483 INFO:teuthology.orchestra.run.vm11.stdout:Setting up libpcre2-16-0:amd64 (10.39-3ubuntu0.1) ...
2026-03-08T22:55:05.497 INFO:teuthology.orchestra.run.vm11.stdout:Setting up python3-psutil (5.9.0-1build1) ...
2026-03-08T22:55:05.583 INFO:teuthology.orchestra.run.vm06.stdout:Setting up python3-natsort (8.0.2-1) ...
2026-03-08T22:55:05.637 INFO:teuthology.orchestra.run.vm11.stdout:Setting up python3-natsort (8.0.2-1) ...
2026-03-08T22:55:05.669 INFO:teuthology.orchestra.run.vm06.stdout:Setting up python3-routes (2.5.1-1ubuntu1) ...
2026-03-08T22:55:05.718 INFO:teuthology.orchestra.run.vm11.stdout:Setting up python3-routes (2.5.1-1ubuntu1) ...
2026-03-08T22:55:05.750 INFO:teuthology.orchestra.run.vm06.stdout:Setting up python3-simplejson (3.17.6-1build1) ...
2026-03-08T22:55:05.797 INFO:teuthology.orchestra.run.vm11.stdout:Setting up python3-simplejson (3.17.6-1build1) ...
2026-03-08T22:55:05.840 INFO:teuthology.orchestra.run.vm06.stdout:Setting up zip (3.0-12build2) ...
2026-03-08T22:55:05.843 INFO:teuthology.orchestra.run.vm06.stdout:Setting up python3-pygments (2.11.2+dfsg-2ubuntu0.1) ...
2026-03-08T22:55:05.880 INFO:teuthology.orchestra.run.vm11.stdout:Setting up zip (3.0-12build2) ...
2026-03-08T22:55:05.883 INFO:teuthology.orchestra.run.vm11.stdout:Setting up python3-pygments (2.11.2+dfsg-2ubuntu0.1) ...
2026-03-08T22:55:06.141 INFO:teuthology.orchestra.run.vm06.stdout:Setting up python3-tempita (0.5.2-6ubuntu1) ...
2026-03-08T22:55:06.181 INFO:teuthology.orchestra.run.vm11.stdout:Setting up python3-tempita (0.5.2-6ubuntu1) ...
2026-03-08T22:55:06.218 INFO:teuthology.orchestra.run.vm06.stdout:Setting up python-pastedeploy-tpl (2.1.1-1) ...
2026-03-08T22:55:06.223 INFO:teuthology.orchestra.run.vm06.stdout:Setting up qttranslations5-l10n (5.15.3-1) ...
2026-03-08T22:55:06.230 INFO:teuthology.orchestra.run.vm06.stdout:Setting up python3-wcwidth (0.2.5+dfsg1-1) ...
2026-03-08T22:55:06.267 INFO:teuthology.orchestra.run.vm11.stdout:Setting up python-pastedeploy-tpl (2.1.1-1) ...
2026-03-08T22:55:06.275 INFO:teuthology.orchestra.run.vm11.stdout:Setting up qttranslations5-l10n (5.15.3-1) ...
2026-03-08T22:55:06.279 INFO:teuthology.orchestra.run.vm11.stdout:Setting up python3-wcwidth (0.2.5+dfsg1-1) ...
2026-03-08T22:55:06.343 INFO:teuthology.orchestra.run.vm06.stdout:Setting up python3-asyncssh (2.5.0-1ubuntu0.1) ...
2026-03-08T22:55:06.391 INFO:teuthology.orchestra.run.vm11.stdout:Setting up python3-asyncssh (2.5.0-1ubuntu0.1) ...
2026-03-08T22:55:06.503 INFO:teuthology.orchestra.run.vm06.stdout:Setting up python3-paste (3.5.0+dfsg1-1) ...
2026-03-08T22:55:06.553 INFO:teuthology.orchestra.run.vm11.stdout:Setting up python3-paste (3.5.0+dfsg1-1) ...
2026-03-08T22:55:06.652 INFO:teuthology.orchestra.run.vm06.stdout:Setting up python3-cheroot (8.5.2+ds1-1ubuntu3.1) ...
2026-03-08T22:55:06.709 INFO:teuthology.orchestra.run.vm11.stdout:Setting up python3-cheroot (8.5.2+ds1-1ubuntu3.1) ...
2026-03-08T22:55:06.761 INFO:teuthology.orchestra.run.vm06.stdout:Setting up python3-werkzeug (2.0.2+dfsg1-1ubuntu0.22.04.3) ...
2026-03-08T22:55:06.820 INFO:teuthology.orchestra.run.vm11.stdout:Setting up python3-werkzeug (2.0.2+dfsg1-1ubuntu0.22.04.3) ...
2026-03-08T22:55:06.903 INFO:teuthology.orchestra.run.vm06.stdout:Setting up python3-jaraco.text (3.6.0-2) ...
2026-03-08T22:55:06.961 INFO:teuthology.orchestra.run.vm11.stdout:Setting up python3-jaraco.text (3.6.0-2) ...
2026-03-08T22:55:06.983 INFO:teuthology.orchestra.run.vm06.stdout:Setting up socat (1.7.4.1-3ubuntu4) ...
2026-03-08T22:55:06.995 INFO:teuthology.orchestra.run.vm06.stdout:Setting up python3-ceph-common (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T22:55:07.039 INFO:teuthology.orchestra.run.vm11.stdout:Setting up socat (1.7.4.1-3ubuntu4) ...
2026-03-08T22:55:07.049 INFO:teuthology.orchestra.run.vm11.stdout:Setting up python3-ceph-common (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T22:55:07.104 INFO:teuthology.orchestra.run.vm06.stdout:Setting up python3-sklearn (0.23.2-5ubuntu6) ...
2026-03-08T22:55:07.153 INFO:teuthology.orchestra.run.vm11.stdout:Setting up python3-sklearn (0.23.2-5ubuntu6) ...
2026-03-08T22:55:07.732 INFO:teuthology.orchestra.run.vm06.stdout:Setting up pkg-config (0.29.2-1ubuntu3) ...
2026-03-08T22:55:07.755 INFO:teuthology.orchestra.run.vm06.stdout:Setting up libqt5core5a:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-08T22:55:07.762 INFO:teuthology.orchestra.run.vm06.stdout:Setting up python3-toml (0.10.2-1) ...
2026-03-08T22:55:07.793 INFO:teuthology.orchestra.run.vm11.stdout:Setting up pkg-config (0.29.2-1ubuntu3) ...
2026-03-08T22:55:07.818 INFO:teuthology.orchestra.run.vm11.stdout:Setting up libqt5core5a:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-08T22:55:07.823 INFO:teuthology.orchestra.run.vm11.stdout:Setting up python3-toml (0.10.2-1) ...
2026-03-08T22:55:07.869 INFO:teuthology.orchestra.run.vm06.stdout:Setting up librdkafka1:amd64 (1.8.0-1build1) ...
2026-03-08T22:55:07.871 INFO:teuthology.orchestra.run.vm06.stdout:Setting up xmlstarlet (1.6.1-2.1) ...
2026-03-08T22:55:07.873 INFO:teuthology.orchestra.run.vm06.stdout:Setting up python3-pluggy (0.13.0-7.1) ...
2026-03-08T22:55:07.892 INFO:teuthology.orchestra.run.vm11.stdout:Setting up librdkafka1:amd64 (1.8.0-1build1) ...
2026-03-08T22:55:07.894 INFO:teuthology.orchestra.run.vm11.stdout:Setting up xmlstarlet (1.6.1-2.1) ...
2026-03-08T22:55:07.897 INFO:teuthology.orchestra.run.vm11.stdout:Setting up python3-pluggy (0.13.0-7.1) ...
2026-03-08T22:55:07.948 INFO:teuthology.orchestra.run.vm06.stdout:Setting up python3-zc.lockfile (2.0-1) ...
2026-03-08T22:55:07.962 INFO:teuthology.orchestra.run.vm11.stdout:Setting up python3-zc.lockfile (2.0-1) ...
2026-03-08T22:55:08.068 INFO:teuthology.orchestra.run.vm06.stdout:Setting up libqt5dbus5:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-08T22:55:08.068 INFO:teuthology.orchestra.run.vm11.stdout:Setting up libqt5dbus5:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-08T22:55:08.090 INFO:teuthology.orchestra.run.vm06.stdout:Setting up python3-rsa (4.8-1) ...
2026-03-08T22:55:08.091 INFO:teuthology.orchestra.run.vm11.stdout:Setting up python3-rsa (4.8-1) ...
2026-03-08T22:55:08.158 INFO:teuthology.orchestra.run.vm06.stdout:Setting up python3-singledispatch (3.4.0.3-3) ...
2026-03-08T22:55:08.170 INFO:teuthology.orchestra.run.vm11.stdout:Setting up python3-singledispatch (3.4.0.3-3) ...
2026-03-08T22:55:08.227 INFO:teuthology.orchestra.run.vm06.stdout:Setting up python3-logutils (0.3.3-8) ...
2026-03-08T22:55:08.239 INFO:teuthology.orchestra.run.vm11.stdout:Setting up python3-logutils (0.3.3-8) ...
2026-03-08T22:55:08.302 INFO:teuthology.orchestra.run.vm06.stdout:Setting up python3-tempora (4.1.2-1) ...
2026-03-08T22:55:08.317 INFO:teuthology.orchestra.run.vm11.stdout:Setting up python3-tempora (4.1.2-1) ...
2026-03-08T22:55:08.387 INFO:teuthology.orchestra.run.vm06.stdout:Setting up python3-simplegeneric (0.8.1-3) ...
2026-03-08T22:55:08.396 INFO:teuthology.orchestra.run.vm11.stdout:Setting up python3-simplegeneric (0.8.1-3) ...
2026-03-08T22:55:08.461 INFO:teuthology.orchestra.run.vm06.stdout:Setting up python3-prettytable (2.5.0-2) ...
2026-03-08T22:55:08.465 INFO:teuthology.orchestra.run.vm11.stdout:Setting up python3-prettytable (2.5.0-2) ...
2026-03-08T22:55:08.536 INFO:teuthology.orchestra.run.vm06.stdout:Setting up liblttng-ust1:amd64 (2.13.1-1ubuntu1) ...
2026-03-08T22:55:08.538 INFO:teuthology.orchestra.run.vm06.stdout:Setting up python3-websocket (1.2.3-1) ...
2026-03-08T22:55:08.540 INFO:teuthology.orchestra.run.vm11.stdout:Setting up liblttng-ust1:amd64 (2.13.1-1ubuntu1) ...
2026-03-08T22:55:08.543 INFO:teuthology.orchestra.run.vm11.stdout:Setting up python3-websocket (1.2.3-1) ...
2026-03-08T22:55:08.625 INFO:teuthology.orchestra.run.vm06.stdout:Setting up libonig5:amd64 (6.9.7.1-2build1) ...
2026-03-08T22:55:08.626 INFO:teuthology.orchestra.run.vm11.stdout:Setting up libonig5:amd64 (6.9.7.1-2build1) ...
2026-03-08T22:55:08.628 INFO:teuthology.orchestra.run.vm11.stdout:Setting up python3-requests-oauthlib (1.3.0+ds-0.1) ...
2026-03-08T22:55:08.628 INFO:teuthology.orchestra.run.vm06.stdout:Setting up python3-requests-oauthlib (1.3.0+ds-0.1) ...
2026-03-08T22:55:08.705 INFO:teuthology.orchestra.run.vm11.stdout:Setting up python3-mako (1.1.3+ds1-2ubuntu0.1) ...
2026-03-08T22:55:08.707 INFO:teuthology.orchestra.run.vm06.stdout:Setting up python3-mako (1.1.3+ds1-2ubuntu0.1) ...
2026-03-08T22:55:08.797 INFO:teuthology.orchestra.run.vm06.stdout:Setting up python3-webob (1:1.8.6-1.1ubuntu0.1) ...
2026-03-08T22:55:08.797 INFO:teuthology.orchestra.run.vm11.stdout:Setting up python3-webob (1:1.8.6-1.1ubuntu0.1) ...
2026-03-08T22:55:08.898 INFO:teuthology.orchestra.run.vm11.stdout:Setting up python3-jaraco.collections (3.4.0-2) ...
2026-03-08T22:55:08.900 INFO:teuthology.orchestra.run.vm06.stdout:Setting up python3-jaraco.collections (3.4.0-2) ...
2026-03-08T22:55:08.970 INFO:teuthology.orchestra.run.vm06.stdout:Setting up liblua5.3-dev:amd64 (5.3.6-1build1) ...
2026-03-08T22:55:08.973 INFO:teuthology.orchestra.run.vm06.stdout:Setting up lua-sec:amd64 (1.0.2-1) ...
2026-03-08T22:55:08.975 INFO:teuthology.orchestra.run.vm11.stdout:Setting up liblua5.3-dev:amd64 (5.3.6-1build1) ...
2026-03-08T22:55:08.975 INFO:teuthology.orchestra.run.vm06.stdout:Setting up libjq1:amd64 (1.6-2.1ubuntu3.1) ...
2026-03-08T22:55:08.978 INFO:teuthology.orchestra.run.vm11.stdout:Setting up lua-sec:amd64 (1.0.2-1) ...
2026-03-08T22:55:08.978 INFO:teuthology.orchestra.run.vm06.stdout:Setting up python3-pytest (6.2.5-1ubuntu2) ...
2026-03-08T22:55:08.981 INFO:teuthology.orchestra.run.vm11.stdout:Setting up libjq1:amd64 (1.6-2.1ubuntu3.1) ...
2026-03-08T22:55:08.985 INFO:teuthology.orchestra.run.vm11.stdout:Setting up python3-pytest (6.2.5-1ubuntu2) ...
2026-03-08T22:55:09.138 INFO:teuthology.orchestra.run.vm11.stdout:Setting up python3-pastedeploy (2.1.1-1) ...
2026-03-08T22:55:09.141 INFO:teuthology.orchestra.run.vm06.stdout:Setting up python3-pastedeploy (2.1.1-1) ...
2026-03-08T22:55:09.208 INFO:teuthology.orchestra.run.vm11.stdout:Setting up lua-any (27ubuntu1) ...
2026-03-08T22:55:09.211 INFO:teuthology.orchestra.run.vm11.stdout:Setting up python3-portend (3.0.0-1) ...
2026-03-08T22:55:09.216 INFO:teuthology.orchestra.run.vm06.stdout:Setting up lua-any (27ubuntu1) ...
2026-03-08T22:55:09.219 INFO:teuthology.orchestra.run.vm06.stdout:Setting up python3-portend (3.0.0-1) ...
2026-03-08T22:55:09.283 INFO:teuthology.orchestra.run.vm11.stdout:Setting up libqt5network5:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-08T22:55:09.286 INFO:teuthology.orchestra.run.vm11.stdout:Setting up python3-google-auth (1.5.1-3) ...
2026-03-08T22:55:09.289 INFO:teuthology.orchestra.run.vm06.stdout:Setting up libqt5network5:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-08T22:55:09.292 INFO:teuthology.orchestra.run.vm06.stdout:Setting up python3-google-auth (1.5.1-3) ...
2026-03-08T22:55:09.376 INFO:teuthology.orchestra.run.vm06.stdout:Setting up jq (1.6-2.1ubuntu3.1) ...
2026-03-08T22:55:09.377 INFO:teuthology.orchestra.run.vm11.stdout:Setting up jq (1.6-2.1ubuntu3.1) ...
2026-03-08T22:55:09.378 INFO:teuthology.orchestra.run.vm06.stdout:Setting up python3-webtest (2.0.35-1) ...
2026-03-08T22:55:09.380 INFO:teuthology.orchestra.run.vm11.stdout:Setting up python3-webtest (2.0.35-1) ...
2026-03-08T22:55:09.457 INFO:teuthology.orchestra.run.vm06.stdout:Setting up python3-cherrypy3 (18.6.1-4) ...
2026-03-08T22:55:09.464 INFO:teuthology.orchestra.run.vm11.stdout:Setting up python3-cherrypy3 (18.6.1-4) ...
2026-03-08T22:55:09.605 INFO:teuthology.orchestra.run.vm06.stdout:Setting up python3-pastescript (2.0.2-4) ...
2026-03-08T22:55:09.618 INFO:teuthology.orchestra.run.vm11.stdout:Setting up python3-pastescript (2.0.2-4) ...
2026-03-08T22:55:09.700 INFO:teuthology.orchestra.run.vm06.stdout:Setting up python3-pecan (1.3.3-4ubuntu2) ...
2026-03-08T22:55:09.715 INFO:teuthology.orchestra.run.vm11.stdout:Setting up python3-pecan (1.3.3-4ubuntu2) ...
2026-03-08T22:55:09.838 INFO:teuthology.orchestra.run.vm06.stdout:Setting up libthrift-0.16.0:amd64 (0.16.0-2) ...
2026-03-08T22:55:09.845 INFO:teuthology.orchestra.run.vm06.stdout:Setting up librados2 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T22:55:09.853 INFO:teuthology.orchestra.run.vm11.stdout:Setting up libthrift-0.16.0:amd64 (0.16.0-2) ...
2026-03-08T22:55:09.861 INFO:teuthology.orchestra.run.vm06.stdout:Setting up libsqlite3-mod-ceph (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T22:55:09.862 INFO:teuthology.orchestra.run.vm11.stdout:Setting up librados2 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T22:55:09.874 INFO:teuthology.orchestra.run.vm06.stdout:Setting up python3-kubernetes (12.0.1-1ubuntu1) ...
2026-03-08T22:55:09.876 INFO:teuthology.orchestra.run.vm11.stdout:Setting up libsqlite3-mod-ceph (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T22:55:09.887 INFO:teuthology.orchestra.run.vm11.stdout:Setting up python3-kubernetes (12.0.1-1ubuntu1) ...
2026-03-08T22:55:10.523 INFO:teuthology.orchestra.run.vm11.stdout:Setting up luarocks (3.8.0+dfsg1-1) ...
2026-03-08T22:55:10.536 INFO:teuthology.orchestra.run.vm11.stdout:Setting up libcephfs2 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T22:55:10.536 INFO:teuthology.orchestra.run.vm06.stdout:Setting up luarocks (3.8.0+dfsg1-1) ...
2026-03-08T22:55:10.542 INFO:teuthology.orchestra.run.vm11.stdout:Setting up libradosstriper1 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T22:55:10.546 INFO:teuthology.orchestra.run.vm11.stdout:Setting up librbd1 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T22:55:10.550 INFO:teuthology.orchestra.run.vm06.stdout:Setting up libcephfs2 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T22:55:10.552 INFO:teuthology.orchestra.run.vm11.stdout:Setting up ceph-mgr-modules-core (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T22:55:10.553 INFO:teuthology.orchestra.run.vm06.stdout:Setting up libradosstriper1 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T22:55:10.556 INFO:teuthology.orchestra.run.vm06.stdout:Setting up librbd1 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T22:55:10.556 INFO:teuthology.orchestra.run.vm11.stdout:Setting up ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T22:55:10.558 INFO:teuthology.orchestra.run.vm06.stdout:Setting up ceph-mgr-modules-core (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T22:55:10.561 INFO:teuthology.orchestra.run.vm06.stdout:Setting up ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T22:55:10.629 INFO:teuthology.orchestra.run.vm06.stdout:Created symlink /etc/systemd/system/remote-fs.target.wants/ceph-fuse.target → /lib/systemd/system/ceph-fuse.target.
2026-03-08T22:55:10.629 INFO:teuthology.orchestra.run.vm06.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-fuse.target → /lib/systemd/system/ceph-fuse.target.
2026-03-08T22:55:10.640 INFO:teuthology.orchestra.run.vm11.stdout:Created symlink /etc/systemd/system/remote-fs.target.wants/ceph-fuse.target → /lib/systemd/system/ceph-fuse.target.
2026-03-08T22:55:10.641 INFO:teuthology.orchestra.run.vm11.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-fuse.target → /lib/systemd/system/ceph-fuse.target.
2026-03-08T22:55:11.006 INFO:teuthology.orchestra.run.vm06.stdout:Setting up libcephfs-dev (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T22:55:11.006 INFO:teuthology.orchestra.run.vm11.stdout:Setting up libcephfs-dev (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T22:55:11.009 INFO:teuthology.orchestra.run.vm06.stdout:Setting up python3-rados (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T22:55:11.009 INFO:teuthology.orchestra.run.vm11.stdout:Setting up python3-rados (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T22:55:11.011 INFO:teuthology.orchestra.run.vm06.stdout:Setting up librgw2 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T22:55:11.012 INFO:teuthology.orchestra.run.vm11.stdout:Setting up librgw2 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T22:55:11.014 INFO:teuthology.orchestra.run.vm06.stdout:Setting up python3-rbd (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T22:55:11.015 INFO:teuthology.orchestra.run.vm11.stdout:Setting up python3-rbd (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T22:55:11.017 INFO:teuthology.orchestra.run.vm06.stdout:Setting up rbd-fuse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T22:55:11.018 INFO:teuthology.orchestra.run.vm11.stdout:Setting up rbd-fuse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T22:55:11.019 INFO:teuthology.orchestra.run.vm06.stdout:Setting up python3-rgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T22:55:11.021 INFO:teuthology.orchestra.run.vm11.stdout:Setting up python3-rgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T22:55:11.022 INFO:teuthology.orchestra.run.vm06.stdout:Setting up python3-cephfs (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T22:55:11.024 INFO:teuthology.orchestra.run.vm11.stdout:Setting up python3-cephfs (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T22:55:11.024 INFO:teuthology.orchestra.run.vm06.stdout:Setting up ceph-common (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T22:55:11.027 INFO:teuthology.orchestra.run.vm11.stdout:Setting up ceph-common (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T22:55:11.062 INFO:teuthology.orchestra.run.vm11.stdout:Adding group ceph....done
2026-03-08T22:55:11.062 INFO:teuthology.orchestra.run.vm06.stdout:Adding group ceph....done
2026-03-08T22:55:11.098 INFO:teuthology.orchestra.run.vm11.stdout:Adding system user ceph....done
2026-03-08T22:55:11.101 INFO:teuthology.orchestra.run.vm06.stdout:Adding system user ceph....done
2026-03-08T22:55:11.108 INFO:teuthology.orchestra.run.vm11.stdout:Setting system user ceph properties....done
2026-03-08T22:55:11.111 INFO:teuthology.orchestra.run.vm06.stdout:Setting system user ceph properties....done
2026-03-08T22:55:11.112 INFO:teuthology.orchestra.run.vm11.stdout:chown: cannot access '/var/log/ceph/*.log*': No such file or directory
2026-03-08T22:55:11.115 INFO:teuthology.orchestra.run.vm06.stdout:chown: cannot access '/var/log/ceph/*.log*': No such file or directory
2026-03-08T22:55:11.177 INFO:teuthology.orchestra.run.vm11.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /lib/systemd/system/ceph.target.
2026-03-08T22:55:11.179 INFO:teuthology.orchestra.run.vm06.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /lib/systemd/system/ceph.target.
2026-03-08T22:55:11.423 INFO:teuthology.orchestra.run.vm11.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/rbdmap.service → /lib/systemd/system/rbdmap.service.
2026-03-08T22:55:11.431 INFO:teuthology.orchestra.run.vm06.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/rbdmap.service → /lib/systemd/system/rbdmap.service.
2026-03-08T22:55:11.804 INFO:teuthology.orchestra.run.vm11.stdout:Setting up ceph-test (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T22:55:11.807 INFO:teuthology.orchestra.run.vm11.stdout:Setting up radosgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T22:55:11.833 INFO:teuthology.orchestra.run.vm06.stdout:Setting up ceph-test (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T22:55:11.835 INFO:teuthology.orchestra.run.vm06.stdout:Setting up radosgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T22:55:12.082 INFO:teuthology.orchestra.run.vm11.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-radosgw.target → /lib/systemd/system/ceph-radosgw.target.
2026-03-08T22:55:12.082 INFO:teuthology.orchestra.run.vm11.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-radosgw.target → /lib/systemd/system/ceph-radosgw.target.
2026-03-08T22:55:12.111 INFO:teuthology.orchestra.run.vm06.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-radosgw.target → /lib/systemd/system/ceph-radosgw.target.
2026-03-08T22:55:12.111 INFO:teuthology.orchestra.run.vm06.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-radosgw.target → /lib/systemd/system/ceph-radosgw.target.
2026-03-08T22:55:12.476 INFO:teuthology.orchestra.run.vm06.stdout:Setting up ceph-base (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T22:55:12.520 INFO:teuthology.orchestra.run.vm11.stdout:Setting up ceph-base (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T22:55:12.561 INFO:teuthology.orchestra.run.vm06.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-crash.service → /lib/systemd/system/ceph-crash.service.
2026-03-08T22:55:12.610 INFO:teuthology.orchestra.run.vm11.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-crash.service → /lib/systemd/system/ceph-crash.service.
2026-03-08T22:55:12.971 INFO:teuthology.orchestra.run.vm06.stdout:Setting up ceph-mds (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T22:55:13.000 INFO:teuthology.orchestra.run.vm11.stdout:Setting up ceph-mds (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T22:55:13.041 INFO:teuthology.orchestra.run.vm06.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mds.target → /lib/systemd/system/ceph-mds.target.
2026-03-08T22:55:13.041 INFO:teuthology.orchestra.run.vm06.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mds.target → /lib/systemd/system/ceph-mds.target.
2026-03-08T22:55:13.070 INFO:teuthology.orchestra.run.vm11.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mds.target → /lib/systemd/system/ceph-mds.target.
2026-03-08T22:55:13.070 INFO:teuthology.orchestra.run.vm11.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mds.target → /lib/systemd/system/ceph-mds.target.
2026-03-08T22:55:13.435 INFO:teuthology.orchestra.run.vm06.stdout:Setting up ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T22:55:13.477 INFO:teuthology.orchestra.run.vm11.stdout:Setting up ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T22:55:13.500 INFO:teuthology.orchestra.run.vm06.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mgr.target → /lib/systemd/system/ceph-mgr.target.
2026-03-08T22:55:13.500 INFO:teuthology.orchestra.run.vm06.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mgr.target → /lib/systemd/system/ceph-mgr.target.
2026-03-08T22:55:13.549 INFO:teuthology.orchestra.run.vm11.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mgr.target → /lib/systemd/system/ceph-mgr.target.
2026-03-08T22:55:13.549 INFO:teuthology.orchestra.run.vm11.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mgr.target → /lib/systemd/system/ceph-mgr.target.
2026-03-08T22:55:13.902 INFO:teuthology.orchestra.run.vm06.stdout:Setting up ceph-osd (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T22:55:13.935 INFO:teuthology.orchestra.run.vm11.stdout:Setting up ceph-osd (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T22:55:13.981 INFO:teuthology.orchestra.run.vm06.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-osd.target → /lib/systemd/system/ceph-osd.target.
2026-03-08T22:55:13.981 INFO:teuthology.orchestra.run.vm06.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-osd.target → /lib/systemd/system/ceph-osd.target.
2026-03-08T22:55:14.017 INFO:teuthology.orchestra.run.vm11.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-osd.target → /lib/systemd/system/ceph-osd.target.
2026-03-08T22:55:14.017 INFO:teuthology.orchestra.run.vm11.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-osd.target → /lib/systemd/system/ceph-osd.target.
2026-03-08T22:55:14.389 INFO:teuthology.orchestra.run.vm06.stdout:Setting up ceph-mgr-k8sevents (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T22:55:14.391 INFO:teuthology.orchestra.run.vm06.stdout:Setting up ceph-mgr-diskprediction-local (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T22:55:14.405 INFO:teuthology.orchestra.run.vm06.stdout:Setting up ceph-mon (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T22:55:14.410 INFO:teuthology.orchestra.run.vm11.stdout:Setting up ceph-mgr-k8sevents (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T22:55:14.413 INFO:teuthology.orchestra.run.vm11.stdout:Setting up ceph-mgr-diskprediction-local (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T22:55:14.428 INFO:teuthology.orchestra.run.vm11.stdout:Setting up ceph-mon (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T22:55:14.465 INFO:teuthology.orchestra.run.vm06.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mon.target → /lib/systemd/system/ceph-mon.target.
2026-03-08T22:55:14.465 INFO:teuthology.orchestra.run.vm06.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mon.target → /lib/systemd/system/ceph-mon.target.
2026-03-08T22:55:14.497 INFO:teuthology.orchestra.run.vm11.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mon.target → /lib/systemd/system/ceph-mon.target.
2026-03-08T22:55:14.497 INFO:teuthology.orchestra.run.vm11.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mon.target → /lib/systemd/system/ceph-mon.target.
2026-03-08T22:55:14.804 INFO:teuthology.orchestra.run.vm06.stdout:Setting up ceph-mgr-cephadm (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T22:55:14.826 INFO:teuthology.orchestra.run.vm06.stdout:Setting up ceph (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T22:55:14.833 INFO:teuthology.orchestra.run.vm06.stdout:Setting up ceph-mgr-dashboard (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T22:55:14.853 INFO:teuthology.orchestra.run.vm06.stdout:Setting up ceph-volume (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T22:55:14.874 INFO:teuthology.orchestra.run.vm11.stdout:Setting up ceph-mgr-cephadm (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T22:55:14.899 INFO:teuthology.orchestra.run.vm11.stdout:Setting up ceph (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T22:55:14.907 INFO:teuthology.orchestra.run.vm11.stdout:Setting up ceph-mgr-dashboard (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T22:55:14.933 INFO:teuthology.orchestra.run.vm11.stdout:Setting up ceph-volume (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T22:55:14.989 INFO:teuthology.orchestra.run.vm06.stdout:Processing triggers for mailcap (3.70+nmu1ubuntu1) ...
2026-03-08T22:55:15.003 INFO:teuthology.orchestra.run.vm06.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ...
2026-03-08T22:55:15.038 INFO:teuthology.orchestra.run.vm06.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-08T22:55:15.073 INFO:teuthology.orchestra.run.vm11.stdout:Processing triggers for mailcap (3.70+nmu1ubuntu1) ...
2026-03-08T22:55:15.093 INFO:teuthology.orchestra.run.vm11.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ...
2026-03-08T22:55:15.119 INFO:teuthology.orchestra.run.vm11.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-08T22:55:15.194 INFO:teuthology.orchestra.run.vm06.stdout:Processing triggers for install-info (6.8-4build1) ...
2026-03-08T22:55:15.276 INFO:teuthology.orchestra.run.vm11.stdout:Processing triggers for install-info (6.8-4build1) ...
2026-03-08T22:55:15.666 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-08T22:55:15.666 INFO:teuthology.orchestra.run.vm06.stdout:Running kernel seems to be up-to-date.
2026-03-08T22:55:15.666 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-08T22:55:15.666 INFO:teuthology.orchestra.run.vm06.stdout:Services to be restarted:
2026-03-08T22:55:15.670 INFO:teuthology.orchestra.run.vm06.stdout: systemctl restart packagekit.service
2026-03-08T22:55:15.675 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-08T22:55:15.675 INFO:teuthology.orchestra.run.vm06.stdout:Service restarts being deferred:
2026-03-08T22:55:15.675 INFO:teuthology.orchestra.run.vm06.stdout: systemctl restart unattended-upgrades.service
2026-03-08T22:55:15.675 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-08T22:55:15.675 INFO:teuthology.orchestra.run.vm06.stdout:No containers need to be restarted.
2026-03-08T22:55:15.675 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-08T22:55:15.675 INFO:teuthology.orchestra.run.vm06.stdout:No user sessions are running outdated binaries.
2026-03-08T22:55:15.675 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-08T22:55:15.675 INFO:teuthology.orchestra.run.vm06.stdout:No VM guests are running outdated hypervisor (qemu) binaries on this host.
2026-03-08T22:55:15.733 INFO:teuthology.orchestra.run.vm11.stdout:
2026-03-08T22:55:15.733 INFO:teuthology.orchestra.run.vm11.stdout:Running kernel seems to be up-to-date.
2026-03-08T22:55:15.733 INFO:teuthology.orchestra.run.vm11.stdout:
2026-03-08T22:55:15.733 INFO:teuthology.orchestra.run.vm11.stdout:Services to be restarted:
2026-03-08T22:55:15.735 INFO:teuthology.orchestra.run.vm11.stdout: systemctl restart packagekit.service
2026-03-08T22:55:15.738 INFO:teuthology.orchestra.run.vm11.stdout:
2026-03-08T22:55:15.738 INFO:teuthology.orchestra.run.vm11.stdout:Service restarts being deferred:
2026-03-08T22:55:15.738 INFO:teuthology.orchestra.run.vm11.stdout: systemctl restart unattended-upgrades.service
2026-03-08T22:55:15.738 INFO:teuthology.orchestra.run.vm11.stdout:
2026-03-08T22:55:15.738 INFO:teuthology.orchestra.run.vm11.stdout:No containers need to be restarted.
2026-03-08T22:55:15.738 INFO:teuthology.orchestra.run.vm11.stdout:
2026-03-08T22:55:15.738 INFO:teuthology.orchestra.run.vm11.stdout:No user sessions are running outdated binaries.
2026-03-08T22:55:15.738 INFO:teuthology.orchestra.run.vm11.stdout:
2026-03-08T22:55:15.738 INFO:teuthology.orchestra.run.vm11.stdout:No VM guests are running outdated hypervisor (qemu) binaries on this host.
2026-03-08T22:55:16.783 INFO:teuthology.orchestra.run.vm06.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-08T22:55:16.788 DEBUG:teuthology.orchestra.run.vm06:> sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install python3-xmltodict python3-jmespath
2026-03-08T22:55:16.858 INFO:teuthology.orchestra.run.vm11.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-08T22:55:16.861 DEBUG:teuthology.orchestra.run.vm11:> sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install python3-xmltodict python3-jmespath
2026-03-08T22:55:16.866 INFO:teuthology.orchestra.run.vm06.stdout:Reading package lists...
2026-03-08T22:55:16.942 INFO:teuthology.orchestra.run.vm11.stdout:Reading package lists...
2026-03-08T22:55:17.053 INFO:teuthology.orchestra.run.vm06.stdout:Building dependency tree...
2026-03-08T22:55:17.053 INFO:teuthology.orchestra.run.vm06.stdout:Reading state information...
2026-03-08T22:55:17.129 INFO:teuthology.orchestra.run.vm11.stdout:Building dependency tree...
2026-03-08T22:55:17.129 INFO:teuthology.orchestra.run.vm11.stdout:Reading state information...
2026-03-08T22:55:17.298 INFO:teuthology.orchestra.run.vm11.stdout:The following packages were automatically installed and are no longer required:
2026-03-08T22:55:17.298 INFO:teuthology.orchestra.run.vm11.stdout: kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-08T22:55:17.298 INFO:teuthology.orchestra.run.vm11.stdout: libsgutils2-2 sg3-utils sg3-utils-udev
2026-03-08T22:55:17.298 INFO:teuthology.orchestra.run.vm11.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-08T22:55:17.301 INFO:teuthology.orchestra.run.vm06.stdout:The following packages were automatically installed and are no longer required:
2026-03-08T22:55:17.301 INFO:teuthology.orchestra.run.vm06.stdout: kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-08T22:55:17.302 INFO:teuthology.orchestra.run.vm06.stdout: libsgutils2-2 sg3-utils sg3-utils-udev
2026-03-08T22:55:17.302 INFO:teuthology.orchestra.run.vm06.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-08T22:55:17.308 INFO:teuthology.orchestra.run.vm11.stdout:The following NEW packages will be installed: 2026-03-08T22:55:17.308 INFO:teuthology.orchestra.run.vm11.stdout: python3-jmespath python3-xmltodict 2026-03-08T22:55:17.321 INFO:teuthology.orchestra.run.vm06.stdout:The following NEW packages will be installed: 2026-03-08T22:55:17.321 INFO:teuthology.orchestra.run.vm06.stdout: python3-jmespath python3-xmltodict 2026-03-08T22:55:17.504 INFO:teuthology.orchestra.run.vm11.stdout:0 upgraded, 2 newly installed, 0 to remove and 10 not upgraded. 2026-03-08T22:55:17.504 INFO:teuthology.orchestra.run.vm11.stdout:Need to get 34.3 kB of archives. 2026-03-08T22:55:17.504 INFO:teuthology.orchestra.run.vm11.stdout:After this operation, 146 kB of additional disk space will be used. 2026-03-08T22:55:17.504 INFO:teuthology.orchestra.run.vm11.stdout:Get:1 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jmespath all 0.10.0-1 [21.7 kB] 2026-03-08T22:55:17.529 INFO:teuthology.orchestra.run.vm06.stdout:0 upgraded, 2 newly installed, 0 to remove and 10 not upgraded. 2026-03-08T22:55:17.529 INFO:teuthology.orchestra.run.vm06.stdout:Need to get 34.3 kB of archives. 2026-03-08T22:55:17.529 INFO:teuthology.orchestra.run.vm06.stdout:After this operation, 146 kB of additional disk space will be used. 
2026-03-08T22:55:17.529 INFO:teuthology.orchestra.run.vm06.stdout:Get:1 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jmespath all 0.10.0-1 [21.7 kB] 2026-03-08T22:55:17.582 INFO:teuthology.orchestra.run.vm11.stdout:Get:2 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-xmltodict all 0.12.0-2 [12.6 kB] 2026-03-08T22:55:17.609 INFO:teuthology.orchestra.run.vm06.stdout:Get:2 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-xmltodict all 0.12.0-2 [12.6 kB] 2026-03-08T22:55:17.817 INFO:teuthology.orchestra.run.vm11.stdout:Fetched 34.3 kB in 0s (125 kB/s) 2026-03-08T22:55:17.818 INFO:teuthology.orchestra.run.vm06.stdout:Fetched 34.3 kB in 0s (120 kB/s) 2026-03-08T22:55:18.562 INFO:teuthology.orchestra.run.vm11.stdout:Selecting previously unselected package python3-jmespath. 2026-03-08T22:55:18.563 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package python3-jmespath. 2026-03-08T22:55:18.590 INFO:teuthology.orchestra.run.vm06.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 118577 files and directories currently installed.) 2026-03-08T22:55:18.592 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../python3-jmespath_0.10.0-1_all.deb ... 2026-03-08T22:55:18.593 INFO:teuthology.orchestra.run.vm11.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 
30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 118577 files and directories currently installed.) 2026-03-08T22:55:18.593 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking python3-jmespath (0.10.0-1) ... 2026-03-08T22:55:18.595 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../python3-jmespath_0.10.0-1_all.deb ... 2026-03-08T22:55:18.596 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking python3-jmespath (0.10.0-1) ... 2026-03-08T22:55:18.609 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package python3-xmltodict. 2026-03-08T22:55:18.613 INFO:teuthology.orchestra.run.vm11.stdout:Selecting previously unselected package python3-xmltodict. 2026-03-08T22:55:18.613 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../python3-xmltodict_0.12.0-2_all.deb ... 2026-03-08T22:55:18.614 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking python3-xmltodict (0.12.0-2) ... 2026-03-08T22:55:18.619 INFO:teuthology.orchestra.run.vm11.stdout:Preparing to unpack .../python3-xmltodict_0.12.0-2_all.deb ... 2026-03-08T22:55:18.620 INFO:teuthology.orchestra.run.vm11.stdout:Unpacking python3-xmltodict (0.12.0-2) ... 2026-03-08T22:55:18.639 INFO:teuthology.orchestra.run.vm06.stdout:Setting up python3-xmltodict (0.12.0-2) ... 2026-03-08T22:55:18.651 INFO:teuthology.orchestra.run.vm11.stdout:Setting up python3-xmltodict (0.12.0-2) ... 2026-03-08T22:55:18.708 INFO:teuthology.orchestra.run.vm06.stdout:Setting up python3-jmespath (0.10.0-1) ... 2026-03-08T22:55:18.723 INFO:teuthology.orchestra.run.vm11.stdout:Setting up python3-jmespath (0.10.0-1) ... 
2026-03-08T22:55:19.134 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-08T22:55:19.134 INFO:teuthology.orchestra.run.vm06.stdout:Running kernel seems to be up-to-date. 2026-03-08T22:55:19.134 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-08T22:55:19.134 INFO:teuthology.orchestra.run.vm06.stdout:Services to be restarted: 2026-03-08T22:55:19.135 INFO:teuthology.orchestra.run.vm11.stdout: 2026-03-08T22:55:19.135 INFO:teuthology.orchestra.run.vm11.stdout:Running kernel seems to be up-to-date. 2026-03-08T22:55:19.135 INFO:teuthology.orchestra.run.vm11.stdout: 2026-03-08T22:55:19.135 INFO:teuthology.orchestra.run.vm11.stdout:Services to be restarted: 2026-03-08T22:55:19.137 INFO:teuthology.orchestra.run.vm06.stdout: systemctl restart packagekit.service 2026-03-08T22:55:19.139 INFO:teuthology.orchestra.run.vm11.stdout: systemctl restart packagekit.service 2026-03-08T22:55:19.140 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-08T22:55:19.140 INFO:teuthology.orchestra.run.vm06.stdout:Service restarts being deferred: 2026-03-08T22:55:19.141 INFO:teuthology.orchestra.run.vm06.stdout: systemctl restart unattended-upgrades.service 2026-03-08T22:55:19.141 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-08T22:55:19.141 INFO:teuthology.orchestra.run.vm06.stdout:No containers need to be restarted. 2026-03-08T22:55:19.141 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-08T22:55:19.141 INFO:teuthology.orchestra.run.vm06.stdout:No user sessions are running outdated binaries. 2026-03-08T22:55:19.141 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-08T22:55:19.141 INFO:teuthology.orchestra.run.vm06.stdout:No VM guests are running outdated hypervisor (qemu) binaries on this host. 
2026-03-08T22:55:19.143 INFO:teuthology.orchestra.run.vm11.stdout: 2026-03-08T22:55:19.143 INFO:teuthology.orchestra.run.vm11.stdout:Service restarts being deferred: 2026-03-08T22:55:19.143 INFO:teuthology.orchestra.run.vm11.stdout: systemctl restart unattended-upgrades.service 2026-03-08T22:55:19.143 INFO:teuthology.orchestra.run.vm11.stdout: 2026-03-08T22:55:19.143 INFO:teuthology.orchestra.run.vm11.stdout:No containers need to be restarted. 2026-03-08T22:55:19.143 INFO:teuthology.orchestra.run.vm11.stdout: 2026-03-08T22:55:19.143 INFO:teuthology.orchestra.run.vm11.stdout:No user sessions are running outdated binaries. 2026-03-08T22:55:19.143 INFO:teuthology.orchestra.run.vm11.stdout: 2026-03-08T22:55:19.143 INFO:teuthology.orchestra.run.vm11.stdout:No VM guests are running outdated hypervisor (qemu) binaries on this host. 2026-03-08T22:55:20.128 INFO:teuthology.orchestra.run.vm06.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-08T22:55:20.132 DEBUG:teuthology.parallel:result is None 2026-03-08T22:55:20.144 INFO:teuthology.orchestra.run.vm11.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-08T22:55:20.148 DEBUG:teuthology.parallel:result is None 2026-03-08T22:55:20.148 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-08T22:55:20.780 DEBUG:teuthology.orchestra.run.vm06:> dpkg-query -W -f '${Version}' ceph 2026-03-08T22:55:20.791 INFO:teuthology.orchestra.run.vm06.stdout:19.2.3-678-ge911bdeb-1jammy 2026-03-08T22:55:20.792 INFO:teuthology.packaging:The installed version of ceph is 19.2.3-678-ge911bdeb-1jammy 2026-03-08T22:55:20.792 INFO:teuthology.task.install:The correct ceph version 19.2.3-678-ge911bdeb-1jammy is installed. 
2026-03-08T22:55:20.793 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-08T22:55:21.420 DEBUG:teuthology.orchestra.run.vm11:> dpkg-query -W -f '${Version}' ceph 2026-03-08T22:55:21.429 INFO:teuthology.orchestra.run.vm11.stdout:19.2.3-678-ge911bdeb-1jammy 2026-03-08T22:55:21.432 INFO:teuthology.packaging:The installed version of ceph is 19.2.3-678-ge911bdeb-1jammy 2026-03-08T22:55:21.432 INFO:teuthology.task.install:The correct ceph version 19.2.3-678-ge911bdeb-1jammy is installed. 2026-03-08T22:55:21.433 INFO:teuthology.task.install.util:Shipping valgrind.supp... 2026-03-08T22:55:21.434 DEBUG:teuthology.orchestra.run.vm06:> set -ex 2026-03-08T22:55:21.434 DEBUG:teuthology.orchestra.run.vm06:> sudo dd of=/home/ubuntu/cephtest/valgrind.supp 2026-03-08T22:55:21.441 DEBUG:teuthology.orchestra.run.vm11:> set -ex 2026-03-08T22:55:21.441 DEBUG:teuthology.orchestra.run.vm11:> sudo dd of=/home/ubuntu/cephtest/valgrind.supp 2026-03-08T22:55:21.478 INFO:teuthology.task.install.util:Shipping 'daemon-helper'... 2026-03-08T22:55:21.479 DEBUG:teuthology.orchestra.run.vm06:> set -ex 2026-03-08T22:55:21.479 DEBUG:teuthology.orchestra.run.vm06:> sudo dd of=/usr/bin/daemon-helper 2026-03-08T22:55:21.492 DEBUG:teuthology.orchestra.run.vm06:> sudo chmod a=rx -- /usr/bin/daemon-helper 2026-03-08T22:55:21.542 DEBUG:teuthology.orchestra.run.vm11:> set -ex 2026-03-08T22:55:21.542 DEBUG:teuthology.orchestra.run.vm11:> sudo dd of=/usr/bin/daemon-helper 2026-03-08T22:55:21.552 DEBUG:teuthology.orchestra.run.vm11:> sudo chmod a=rx -- /usr/bin/daemon-helper 2026-03-08T22:55:21.603 INFO:teuthology.task.install.util:Shipping 'adjust-ulimits'... 
2026-03-08T22:55:21.604 DEBUG:teuthology.orchestra.run.vm06:> set -ex 2026-03-08T22:55:21.604 DEBUG:teuthology.orchestra.run.vm06:> sudo dd of=/usr/bin/adjust-ulimits 2026-03-08T22:55:21.612 DEBUG:teuthology.orchestra.run.vm06:> sudo chmod a=rx -- /usr/bin/adjust-ulimits 2026-03-08T22:55:21.661 DEBUG:teuthology.orchestra.run.vm11:> set -ex 2026-03-08T22:55:21.689 DEBUG:teuthology.orchestra.run.vm11:> sudo dd of=/usr/bin/adjust-ulimits 2026-03-08T22:55:21.697 DEBUG:teuthology.orchestra.run.vm11:> sudo chmod a=rx -- /usr/bin/adjust-ulimits 2026-03-08T22:55:21.745 INFO:teuthology.task.install.util:Shipping 'stdin-killer'... 2026-03-08T22:55:21.746 DEBUG:teuthology.orchestra.run.vm06:> set -ex 2026-03-08T22:55:21.746 DEBUG:teuthology.orchestra.run.vm06:> sudo dd of=/usr/bin/stdin-killer 2026-03-08T22:55:21.754 DEBUG:teuthology.orchestra.run.vm06:> sudo chmod a=rx -- /usr/bin/stdin-killer 2026-03-08T22:55:21.805 DEBUG:teuthology.orchestra.run.vm11:> set -ex 2026-03-08T22:55:21.805 DEBUG:teuthology.orchestra.run.vm11:> sudo dd of=/usr/bin/stdin-killer 2026-03-08T22:55:21.813 DEBUG:teuthology.orchestra.run.vm11:> sudo chmod a=rx -- /usr/bin/stdin-killer 2026-03-08T22:55:21.862 INFO:teuthology.run_tasks:Running task cephadm... 
2026-03-08T22:55:21.910 INFO:tasks.cephadm:Config: {'conf': {'mgr': {'debug mgr': 20, 'debug ms': 1}, 'global': {'mon election default strategy': 3, 'ms bind msgr1': False, 'ms bind msgr2': True, 'ms type': 'async'}, 'mon': {'debug mon': 20, 'debug ms': 1, 'debug paxos': 20}, 'osd': {'debug ms': 1, 'debug osd': 20, 'osd mclock iops capacity threshold hdd': 49000, 'osd shutdown pgref assert': True}}, 'flavor': 'default', 'log-ignorelist': ['\\(MDS_ALL_DOWN\\)', '\\(MDS_UP_LESS_THAN_MAX\\)'], 'log-only-match': ['CEPHADM_'], 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df', 'cephadm_mode': 'root'} 2026-03-08T22:55:21.910 INFO:tasks.cephadm:Cluster image is quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-08T22:55:21.911 INFO:tasks.cephadm:Cluster fsid is e2eb96e6-1b41-11f1-83e5-75f1b5373d30 2026-03-08T22:55:21.911 INFO:tasks.cephadm:Choosing monitor IPs and ports... 2026-03-08T22:55:21.911 INFO:tasks.cephadm:Monitor IPs: {'mon.a': '192.168.123.106', 'mon.c': '[v2:192.168.123.106:3301,v1:192.168.123.106:6790]', 'mon.b': '192.168.123.111'} 2026-03-08T22:55:21.911 INFO:tasks.cephadm:First mon is mon.a on vm06 2026-03-08T22:55:21.911 INFO:tasks.cephadm:First mgr is y 2026-03-08T22:55:21.911 INFO:tasks.cephadm:Normalizing hostnames... 
2026-03-08T22:55:21.911 DEBUG:teuthology.orchestra.run.vm06:> sudo hostname $(hostname -s) 2026-03-08T22:55:21.922 DEBUG:teuthology.orchestra.run.vm11:> sudo hostname $(hostname -s) 2026-03-08T22:55:21.932 INFO:tasks.cephadm:Downloading "compiled" cephadm from cachra 2026-03-08T22:55:21.932 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-08T22:55:22.540 INFO:tasks.cephadm:builder_project result: [{'url': 'https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default/', 'chacra_url': 'https://1.chacra.ceph.com/repos/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default/', 'ref': 'squid', 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df', 'distro': 'ubuntu', 'distro_version': '22.04', 'distro_codename': 'jammy', 'modified': '2026-02-25 19:37:07.680480', 'status': 'ready', 'flavor': 'default', 'project': 'ceph', 'archs': ['x86_64'], 'extra': {'version': '19.2.3-678-ge911bdeb', 'package_manager_version': '19.2.3-678-ge911bdeb-1jammy', 'build_url': 'https://jenkins.ceph.com/job/ceph-dev-pipeline/3275/', 'root_build_cause': '', 'node_name': '10.20.192.98+toko08', 'job_name': 'ceph-dev-pipeline'}}] 2026-03-08T22:55:23.184 INFO:tasks.util.chacra:got chacra host 1.chacra.ceph.com, ref squid, sha1 e911bdebe5c8faa3800735d1568fcdca65db60df from https://shaman.ceph.com/api/search/?project=ceph&distros=ubuntu%2F22.04%2Fx86_64&flavor=default&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-08T22:55:23.188 INFO:tasks.cephadm:Discovered cachra url: https://1.chacra.ceph.com/binaries/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/x86_64/flavors/default/cephadm 2026-03-08T22:55:23.188 INFO:tasks.cephadm:Downloading cephadm from url: 
https://1.chacra.ceph.com/binaries/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/x86_64/flavors/default/cephadm 2026-03-08T22:55:23.188 DEBUG:teuthology.orchestra.run.vm06:> curl --silent -L https://1.chacra.ceph.com/binaries/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/x86_64/flavors/default/cephadm > /home/ubuntu/cephtest/cephadm && ls -l /home/ubuntu/cephtest/cephadm 2026-03-08T22:55:24.580 INFO:teuthology.orchestra.run.vm06.stdout:-rw-rw-r-- 1 ubuntu ubuntu 795696 Mar 8 22:55 /home/ubuntu/cephtest/cephadm 2026-03-08T22:55:24.581 DEBUG:teuthology.orchestra.run.vm11:> curl --silent -L https://1.chacra.ceph.com/binaries/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/x86_64/flavors/default/cephadm > /home/ubuntu/cephtest/cephadm && ls -l /home/ubuntu/cephtest/cephadm 2026-03-08T22:55:25.955 INFO:teuthology.orchestra.run.vm11.stdout:-rw-rw-r-- 1 ubuntu ubuntu 795696 Mar 8 22:55 /home/ubuntu/cephtest/cephadm 2026-03-08T22:55:25.955 DEBUG:teuthology.orchestra.run.vm06:> test -s /home/ubuntu/cephtest/cephadm && test $(stat -c%s /home/ubuntu/cephtest/cephadm) -gt 1000 && chmod +x /home/ubuntu/cephtest/cephadm 2026-03-08T22:55:25.959 DEBUG:teuthology.orchestra.run.vm11:> test -s /home/ubuntu/cephtest/cephadm && test $(stat -c%s /home/ubuntu/cephtest/cephadm) -gt 1000 && chmod +x /home/ubuntu/cephtest/cephadm 2026-03-08T22:55:25.968 INFO:tasks.cephadm:Pulling image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df on all hosts... 
2026-03-08T22:55:25.968 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df pull 2026-03-08T22:55:26.001 DEBUG:teuthology.orchestra.run.vm11:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df pull 2026-03-08T22:55:26.096 INFO:teuthology.orchestra.run.vm06.stderr:Pulling container image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df... 2026-03-08T22:55:26.101 INFO:teuthology.orchestra.run.vm11.stderr:Pulling container image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df... 2026-03-08T22:56:30.339 INFO:teuthology.orchestra.run.vm11.stdout:{ 2026-03-08T22:56:30.339 INFO:teuthology.orchestra.run.vm11.stdout: "ceph_version": "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)", 2026-03-08T22:56:30.339 INFO:teuthology.orchestra.run.vm11.stdout: "image_id": "654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c", 2026-03-08T22:56:30.339 INFO:teuthology.orchestra.run.vm11.stdout: "repo_digests": [ 2026-03-08T22:56:30.339 INFO:teuthology.orchestra.run.vm11.stdout: "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc" 2026-03-08T22:56:30.339 INFO:teuthology.orchestra.run.vm11.stdout: ] 2026-03-08T22:56:30.339 INFO:teuthology.orchestra.run.vm11.stdout:} 2026-03-08T22:56:46.717 INFO:teuthology.orchestra.run.vm06.stdout:{ 2026-03-08T22:56:46.717 INFO:teuthology.orchestra.run.vm06.stdout: "ceph_version": "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)", 2026-03-08T22:56:46.717 INFO:teuthology.orchestra.run.vm06.stdout: "image_id": "654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c", 2026-03-08T22:56:46.717 INFO:teuthology.orchestra.run.vm06.stdout: "repo_digests": [ 2026-03-08T22:56:46.717 
INFO:teuthology.orchestra.run.vm06.stdout: "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc" 2026-03-08T22:56:46.717 INFO:teuthology.orchestra.run.vm06.stdout: ] 2026-03-08T22:56:46.717 INFO:teuthology.orchestra.run.vm06.stdout:} 2026-03-08T22:56:46.733 DEBUG:teuthology.orchestra.run.vm06:> sudo mkdir -p /etc/ceph 2026-03-08T22:56:46.742 DEBUG:teuthology.orchestra.run.vm11:> sudo mkdir -p /etc/ceph 2026-03-08T22:56:46.756 DEBUG:teuthology.orchestra.run.vm06:> sudo chmod 777 /etc/ceph 2026-03-08T22:56:46.793 DEBUG:teuthology.orchestra.run.vm11:> sudo chmod 777 /etc/ceph 2026-03-08T22:56:46.816 INFO:tasks.cephadm:Writing seed config... 2026-03-08T22:56:46.817 INFO:tasks.cephadm: override: [mgr] debug mgr = 20 2026-03-08T22:56:46.817 INFO:tasks.cephadm: override: [mgr] debug ms = 1 2026-03-08T22:56:46.817 INFO:tasks.cephadm: override: [global] mon election default strategy = 3 2026-03-08T22:56:46.817 INFO:tasks.cephadm: override: [global] ms bind msgr1 = False 2026-03-08T22:56:46.817 INFO:tasks.cephadm: override: [global] ms bind msgr2 = True 2026-03-08T22:56:46.817 INFO:tasks.cephadm: override: [global] ms type = async 2026-03-08T22:56:46.817 INFO:tasks.cephadm: override: [mon] debug mon = 20 2026-03-08T22:56:46.817 INFO:tasks.cephadm: override: [mon] debug ms = 1 2026-03-08T22:56:46.817 INFO:tasks.cephadm: override: [mon] debug paxos = 20 2026-03-08T22:56:46.817 INFO:tasks.cephadm: override: [osd] debug ms = 1 2026-03-08T22:56:46.817 INFO:tasks.cephadm: override: [osd] debug osd = 20 2026-03-08T22:56:46.817 INFO:tasks.cephadm: override: [osd] osd mclock iops capacity threshold hdd = 49000 2026-03-08T22:56:46.817 INFO:tasks.cephadm: override: [osd] osd shutdown pgref assert = True 2026-03-08T22:56:46.817 DEBUG:teuthology.orchestra.run.vm06:> set -ex 2026-03-08T22:56:46.817 DEBUG:teuthology.orchestra.run.vm06:> dd of=/home/ubuntu/cephtest/seed.ceph.conf 2026-03-08T22:56:46.840 DEBUG:tasks.cephadm:Final config: [global] 
# make logging friendly to teuthology log_to_file = true log_to_stderr = false log to journald = false mon cluster log to file = true mon cluster log file level = debug mon clock drift allowed = 1.000 # replicate across OSDs, not hosts osd crush chooseleaf type = 0 #osd pool default size = 2 osd pool default erasure code profile = plugin=jerasure technique=reed_sol_van k=2 m=1 crush-failure-domain=osd # enable some debugging auth debug = true ms die on old message = true ms die on bug = true debug asserts on shutdown = true # adjust warnings mon max pg per osd = 10000# >= luminous mon pg warn max object skew = 0 mon osd allow primary affinity = true mon osd allow pg remap = true mon warn on legacy crush tunables = false mon warn on crush straw calc version zero = false mon warn on no sortbitwise = false mon warn on osd down out interval zero = false mon warn on too few osds = false mon_warn_on_pool_pg_num_not_power_of_two = false # disable pg_autoscaler by default for new pools osd_pool_default_pg_autoscale_mode = off # tests delete pools mon allow pool delete = true fsid = e2eb96e6-1b41-11f1-83e5-75f1b5373d30 mon election default strategy = 3 ms bind msgr1 = False ms bind msgr2 = True ms type = async [osd] osd scrub load threshold = 5.0 osd scrub max interval = 600 osd mclock profile = high_recovery_ops osd recover clone overlap = true osd recovery max chunk = 1048576 osd deep scrub update digest min age = 30 osd map max advance = 10 osd memory target autotune = true # debugging osd debug shutdown = true osd debug op order = true osd debug verify stray on activate = true osd debug pg log writeout = true osd debug verify cached snaps = true osd debug verify missing on start = true osd debug misdirected ops = true osd op queue = debug_random osd op queue cut off = debug_random osd shutdown pgref assert = True bdev debug aio = true osd sloppy crc = true debug ms = 1 debug osd = 20 osd mclock iops capacity threshold hdd = 49000 [mgr] mon reweight min pgs per osd = 4 
mon reweight min bytes per osd = 10 mgr/telemetry/nag = false debug mgr = 20 debug ms = 1 [mon] mon data avail warn = 5 mon mgr mkfs grace = 240 mon reweight min pgs per osd = 4 mon osd reporter subtree level = osd mon osd prime pg temp = true mon reweight min bytes per osd = 10 # rotate auth tickets quickly to exercise renewal paths auth mon ticket ttl = 660# 11m auth service ticket ttl = 240# 4m # don't complain about global id reclaim mon_warn_on_insecure_global_id_reclaim = false mon_warn_on_insecure_global_id_reclaim_allowed = false debug mon = 20 debug ms = 1 debug paxos = 20 [client.rgw] rgw cache enabled = true rgw enable ops log = true rgw enable usage log = true 2026-03-08T22:56:46.840 DEBUG:teuthology.orchestra.run.vm06:mon.a> sudo journalctl -f -n 0 -u ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@mon.a.service 2026-03-08T22:56:46.882 DEBUG:teuthology.orchestra.run.vm06:mgr.y> sudo journalctl -f -n 0 -u ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@mgr.y.service 2026-03-08T22:56:46.925 INFO:tasks.cephadm:Bootstrapping... 
2026-03-08T22:56:46.925 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df -v bootstrap --fsid e2eb96e6-1b41-11f1-83e5-75f1b5373d30 --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id y --orphan-initial-daemons --skip-monitoring-stack --mon-ip 192.168.123.106 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring 2026-03-08T22:56:47.059 INFO:teuthology.orchestra.run.vm06.stdout:-------------------------------------------------------------------------------- 2026-03-08T22:56:47.059 INFO:teuthology.orchestra.run.vm06.stdout:cephadm ['--image', 'quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df', '-v', 'bootstrap', '--fsid', 'e2eb96e6-1b41-11f1-83e5-75f1b5373d30', '--config', '/home/ubuntu/cephtest/seed.ceph.conf', '--output-config', '/etc/ceph/ceph.conf', '--output-keyring', '/etc/ceph/ceph.client.admin.keyring', '--output-pub-ssh-key', '/home/ubuntu/cephtest/ceph.pub', '--mon-id', 'a', '--mgr-id', 'y', '--orphan-initial-daemons', '--skip-monitoring-stack', '--mon-ip', '192.168.123.106', '--skip-admin-label'] 2026-03-08T22:56:47.059 INFO:teuthology.orchestra.run.vm06.stderr:Specifying an fsid for your cluster offers no advantages and may increase the likelihood of fsid conflicts. 2026-03-08T22:56:47.059 INFO:teuthology.orchestra.run.vm06.stdout:Verifying podman|docker is present... 2026-03-08T22:56:47.059 INFO:teuthology.orchestra.run.vm06.stdout:Verifying lvm2 is present... 2026-03-08T22:56:47.059 INFO:teuthology.orchestra.run.vm06.stdout:Verifying time synchronization is in place... 
2026-03-08T22:56:47.063 INFO:teuthology.orchestra.run.vm06.stdout:Non-zero exit code 1 from systemctl is-enabled chrony.service 2026-03-08T22:56:47.063 INFO:teuthology.orchestra.run.vm06.stdout:systemctl: stderr Failed to get unit file state for chrony.service: No such file or directory 2026-03-08T22:56:47.066 INFO:teuthology.orchestra.run.vm06.stdout:Non-zero exit code 3 from systemctl is-active chrony.service 2026-03-08T22:56:47.066 INFO:teuthology.orchestra.run.vm06.stdout:systemctl: stdout inactive 2026-03-08T22:56:47.068 INFO:teuthology.orchestra.run.vm06.stdout:Non-zero exit code 1 from systemctl is-enabled chronyd.service 2026-03-08T22:56:47.068 INFO:teuthology.orchestra.run.vm06.stdout:systemctl: stderr Failed to get unit file state for chronyd.service: No such file or directory 2026-03-08T22:56:47.071 INFO:teuthology.orchestra.run.vm06.stdout:Non-zero exit code 3 from systemctl is-active chronyd.service 2026-03-08T22:56:47.071 INFO:teuthology.orchestra.run.vm06.stdout:systemctl: stdout inactive 2026-03-08T22:56:47.073 INFO:teuthology.orchestra.run.vm06.stdout:Non-zero exit code 1 from systemctl is-enabled systemd-timesyncd.service 2026-03-08T22:56:47.073 INFO:teuthology.orchestra.run.vm06.stdout:systemctl: stdout masked 2026-03-08T22:56:47.075 INFO:teuthology.orchestra.run.vm06.stdout:Non-zero exit code 3 from systemctl is-active systemd-timesyncd.service 2026-03-08T22:56:47.075 INFO:teuthology.orchestra.run.vm06.stdout:systemctl: stdout inactive 2026-03-08T22:56:47.078 INFO:teuthology.orchestra.run.vm06.stdout:Non-zero exit code 1 from systemctl is-enabled ntpd.service 2026-03-08T22:56:47.078 INFO:teuthology.orchestra.run.vm06.stdout:systemctl: stderr Failed to get unit file state for ntpd.service: No such file or directory 2026-03-08T22:56:47.081 INFO:teuthology.orchestra.run.vm06.stdout:Non-zero exit code 3 from systemctl is-active ntpd.service 2026-03-08T22:56:47.081 INFO:teuthology.orchestra.run.vm06.stdout:systemctl: stdout inactive 
2026-03-08T22:56:47.083 INFO:teuthology.orchestra.run.vm06.stdout:systemctl: stdout enabled 2026-03-08T22:56:47.086 INFO:teuthology.orchestra.run.vm06.stdout:systemctl: stdout active 2026-03-08T22:56:47.086 INFO:teuthology.orchestra.run.vm06.stdout:Unit ntp.service is enabled and running 2026-03-08T22:56:47.086 INFO:teuthology.orchestra.run.vm06.stdout:Repeating the final host check... 2026-03-08T22:56:47.086 INFO:teuthology.orchestra.run.vm06.stdout:docker (/usr/bin/docker) is present 2026-03-08T22:56:47.086 INFO:teuthology.orchestra.run.vm06.stdout:systemctl is present 2026-03-08T22:56:47.086 INFO:teuthology.orchestra.run.vm06.stdout:lvcreate is present 2026-03-08T22:56:47.088 INFO:teuthology.orchestra.run.vm06.stdout:Non-zero exit code 1 from systemctl is-enabled chrony.service 2026-03-08T22:56:47.088 INFO:teuthology.orchestra.run.vm06.stdout:systemctl: stderr Failed to get unit file state for chrony.service: No such file or directory 2026-03-08T22:56:47.091 INFO:teuthology.orchestra.run.vm06.stdout:Non-zero exit code 3 from systemctl is-active chrony.service 2026-03-08T22:56:47.091 INFO:teuthology.orchestra.run.vm06.stdout:systemctl: stdout inactive 2026-03-08T22:56:47.093 INFO:teuthology.orchestra.run.vm06.stdout:Non-zero exit code 1 from systemctl is-enabled chronyd.service 2026-03-08T22:56:47.093 INFO:teuthology.orchestra.run.vm06.stdout:systemctl: stderr Failed to get unit file state for chronyd.service: No such file or directory 2026-03-08T22:56:47.095 INFO:teuthology.orchestra.run.vm06.stdout:Non-zero exit code 3 from systemctl is-active chronyd.service 2026-03-08T22:56:47.095 INFO:teuthology.orchestra.run.vm06.stdout:systemctl: stdout inactive 2026-03-08T22:56:47.098 INFO:teuthology.orchestra.run.vm06.stdout:Non-zero exit code 1 from systemctl is-enabled systemd-timesyncd.service 2026-03-08T22:56:47.098 INFO:teuthology.orchestra.run.vm06.stdout:systemctl: stdout masked 2026-03-08T22:56:47.100 INFO:teuthology.orchestra.run.vm06.stdout:Non-zero exit code 3 
from systemctl is-active systemd-timesyncd.service 2026-03-08T22:56:47.100 INFO:teuthology.orchestra.run.vm06.stdout:systemctl: stdout inactive 2026-03-08T22:56:47.102 INFO:teuthology.orchestra.run.vm06.stdout:Non-zero exit code 1 from systemctl is-enabled ntpd.service 2026-03-08T22:56:47.102 INFO:teuthology.orchestra.run.vm06.stdout:systemctl: stderr Failed to get unit file state for ntpd.service: No such file or directory 2026-03-08T22:56:47.105 INFO:teuthology.orchestra.run.vm06.stdout:Non-zero exit code 3 from systemctl is-active ntpd.service 2026-03-08T22:56:47.105 INFO:teuthology.orchestra.run.vm06.stdout:systemctl: stdout inactive 2026-03-08T22:56:47.107 INFO:teuthology.orchestra.run.vm06.stdout:systemctl: stdout enabled 2026-03-08T22:56:47.135 INFO:teuthology.orchestra.run.vm06.stdout:systemctl: stdout active 2026-03-08T22:56:47.135 INFO:teuthology.orchestra.run.vm06.stdout:Unit ntp.service is enabled and running 2026-03-08T22:56:47.135 INFO:teuthology.orchestra.run.vm06.stdout:Host looks OK 2026-03-08T22:56:47.135 INFO:teuthology.orchestra.run.vm06.stdout:Cluster fsid: e2eb96e6-1b41-11f1-83e5-75f1b5373d30 2026-03-08T22:56:47.135 INFO:teuthology.orchestra.run.vm06.stdout:Acquiring lock 140321550076464 on /run/cephadm/e2eb96e6-1b41-11f1-83e5-75f1b5373d30.lock 2026-03-08T22:56:47.135 INFO:teuthology.orchestra.run.vm06.stdout:Lock 140321550076464 acquired on /run/cephadm/e2eb96e6-1b41-11f1-83e5-75f1b5373d30.lock 2026-03-08T22:56:47.135 INFO:teuthology.orchestra.run.vm06.stdout:Verifying IP 192.168.123.106 port 3300 ... 2026-03-08T22:56:47.135 INFO:teuthology.orchestra.run.vm06.stdout:Verifying IP 192.168.123.106 port 6789 ... 
2026-03-08T22:56:47.135 INFO:teuthology.orchestra.run.vm06.stdout:Base mon IP(s) is [192.168.123.106:3300, 192.168.123.106:6789], mon addrv is [v2:192.168.123.106:3300,v1:192.168.123.106:6789] 2026-03-08T22:56:47.135 INFO:teuthology.orchestra.run.vm06.stdout:/usr/sbin/ip: stdout default via 192.168.123.1 dev ens3 proto dhcp src 192.168.123.106 metric 100 2026-03-08T22:56:47.135 INFO:teuthology.orchestra.run.vm06.stdout:/usr/sbin/ip: stdout 172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown 2026-03-08T22:56:47.135 INFO:teuthology.orchestra.run.vm06.stdout:/usr/sbin/ip: stdout 192.168.123.0/24 dev ens3 proto kernel scope link src 192.168.123.106 metric 100 2026-03-08T22:56:47.135 INFO:teuthology.orchestra.run.vm06.stdout:/usr/sbin/ip: stdout 192.168.123.1 dev ens3 proto dhcp scope link src 192.168.123.106 metric 100 2026-03-08T22:56:47.135 INFO:teuthology.orchestra.run.vm06.stdout:/usr/sbin/ip: stdout ::1 dev lo proto kernel metric 256 pref medium 2026-03-08T22:56:47.135 INFO:teuthology.orchestra.run.vm06.stdout:/usr/sbin/ip: stdout fe80::/64 dev ens3 proto kernel metric 256 pref medium 2026-03-08T22:56:47.135 INFO:teuthology.orchestra.run.vm06.stdout:/usr/sbin/ip: stdout 1: lo: mtu 65536 state UNKNOWN qlen 1000 2026-03-08T22:56:47.135 INFO:teuthology.orchestra.run.vm06.stdout:/usr/sbin/ip: stdout inet6 ::1/128 scope host 2026-03-08T22:56:47.135 INFO:teuthology.orchestra.run.vm06.stdout:/usr/sbin/ip: stdout valid_lft forever preferred_lft forever 2026-03-08T22:56:47.135 INFO:teuthology.orchestra.run.vm06.stdout:/usr/sbin/ip: stdout 2: ens3: mtu 1500 state UP qlen 1000 2026-03-08T22:56:47.135 INFO:teuthology.orchestra.run.vm06.stdout:/usr/sbin/ip: stdout inet6 fe80::5055:ff:fe00:6/64 scope link 2026-03-08T22:56:47.135 INFO:teuthology.orchestra.run.vm06.stdout:/usr/sbin/ip: stdout valid_lft forever preferred_lft forever 2026-03-08T22:56:47.135 INFO:teuthology.orchestra.run.vm06.stdout:Mon IP `192.168.123.106` is in CIDR network `192.168.123.0/24` 
2026-03-08T22:56:47.135 INFO:teuthology.orchestra.run.vm06.stdout:Mon IP `192.168.123.106` is in CIDR network `192.168.123.0/24`
2026-03-08T22:56:47.135 INFO:teuthology.orchestra.run.vm06.stdout:Mon IP `192.168.123.106` is in CIDR network `192.168.123.1/32`
2026-03-08T22:56:47.135 INFO:teuthology.orchestra.run.vm06.stdout:Mon IP `192.168.123.106` is in CIDR network `192.168.123.1/32`
2026-03-08T22:56:47.136 INFO:teuthology.orchestra.run.vm06.stdout:Inferred mon public CIDR from local network configuration ['192.168.123.0/24', '192.168.123.0/24', '192.168.123.1/32', '192.168.123.1/32']
2026-03-08T22:56:47.136 INFO:teuthology.orchestra.run.vm06.stdout:Internal network (--cluster-network) has not been provided, OSD replication will default to the public_network
2026-03-08T22:56:47.136 INFO:teuthology.orchestra.run.vm06.stdout:Pulling container image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df...
2026-03-08T22:56:48.240 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/docker: stdout e911bdebe5c8faa3800735d1568fcdca65db60df: Pulling from ceph-ci/ceph
2026-03-08T22:56:48.240 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/docker: stdout Digest: sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc
2026-03-08T22:56:48.240 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/docker: stdout Status: Image is up to date for quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-08T22:56:48.240 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/docker: stdout quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-08T22:56:48.387 INFO:teuthology.orchestra.run.vm06.stdout:ceph: stdout ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)
2026-03-08T22:56:48.387 INFO:teuthology.orchestra.run.vm06.stdout:Ceph version: ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)
2026-03-08T22:56:48.387 INFO:teuthology.orchestra.run.vm06.stdout:Extracting ceph user uid/gid from container image...
2026-03-08T22:56:48.477 INFO:teuthology.orchestra.run.vm06.stdout:stat: stdout 167 167
2026-03-08T22:56:48.477 INFO:teuthology.orchestra.run.vm06.stdout:Creating initial keys...
2026-03-08T22:56:48.574 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-authtool: stdout AQCw/q1pdB/bIBAAhVed1ssYKn0lIt7CKReD1Q==
2026-03-08T22:56:48.669 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-authtool: stdout AQCw/q1pDo6NJhAAZu3/VajLNDWFIq+V8/NLgA==
2026-03-08T22:56:48.768 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-authtool: stdout AQCw/q1ptko8LBAAbRDcBdFGfF7luVs55STIDw==
2026-03-08T22:56:48.768 INFO:teuthology.orchestra.run.vm06.stdout:Creating initial monmap...
2026-03-08T22:56:48.878 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: monmap file /tmp/monmap
2026-03-08T22:56:48.878 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/monmaptool: stdout setting min_mon_release = quincy
2026-03-08T22:56:48.878 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: set fsid to e2eb96e6-1b41-11f1-83e5-75f1b5373d30
2026-03-08T22:56:48.878 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
2026-03-08T22:56:48.878 INFO:teuthology.orchestra.run.vm06.stdout:monmaptool for a [v2:192.168.123.106:3300,v1:192.168.123.106:6789] on /usr/bin/monmaptool: monmap file /tmp/monmap
2026-03-08T22:56:48.878 INFO:teuthology.orchestra.run.vm06.stdout:setting min_mon_release = quincy
2026-03-08T22:56:48.878 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/monmaptool: set fsid to e2eb96e6-1b41-11f1-83e5-75f1b5373d30
2026-03-08T22:56:48.878 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
2026-03-08T22:56:48.878 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-08T22:56:48.878 INFO:teuthology.orchestra.run.vm06.stdout:Creating mon...
2026-03-08T22:56:49.002 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.949+0000 7fe9dad51d80 0 set uid:gid to 167:167 (ceph:ceph)
2026-03-08T22:56:49.002 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.953+0000 7fe9dad51d80 1 imported monmap:
2026-03-08T22:56:49.002 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr epoch 0
2026-03-08T22:56:49.002 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr fsid e2eb96e6-1b41-11f1-83e5-75f1b5373d30
2026-03-08T22:56:49.002 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr last_changed 2026-03-08T22:56:48.853084+0000
2026-03-08T22:56:49.002 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr created 2026-03-08T22:56:48.853084+0000
2026-03-08T22:56:49.002 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr min_mon_release 17 (quincy)
2026-03-08T22:56:49.002 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr election_strategy: 1
2026-03-08T22:56:49.002 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr 0: [v2:192.168.123.106:3300/0,v1:192.168.123.106:6789/0] mon.a
2026-03-08T22:56:49.002 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr
2026-03-08T22:56:49.002 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.953+0000 7fe9dad51d80 0 /usr/bin/ceph-mon: set fsid to e2eb96e6-1b41-11f1-83e5-75f1b5373d30
2026-03-08T22:56:49.002 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.953+0000 7fe9dad51d80 4 rocksdb: RocksDB version: 7.9.2
2026-03-08T22:56:49.002 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr
2026-03-08T22:56:49.002 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.953+0000 7fe9dad51d80 4 rocksdb: Git sha 0
2026-03-08T22:56:49.002 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.953+0000 7fe9dad51d80 4 rocksdb: Compile date 2026-02-25 18:11:04
2026-03-08T22:56:49.002 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.953+0000 7fe9dad51d80 4 rocksdb: DB SUMMARY
2026-03-08T22:56:49.002 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr
2026-03-08T22:56:49.002 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.953+0000 7fe9dad51d80 4 rocksdb: DB Session ID: NZGXEU1CNI9N7GYBTNHK
2026-03-08T22:56:49.002 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr
2026-03-08T22:56:49.002 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.953+0000 7fe9dad51d80 4 rocksdb: SST files in /var/lib/ceph/mon/ceph-a/store.db dir, Total Num: 0, files:
2026-03-08T22:56:49.003 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr
2026-03-08T22:56:49.003 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.953+0000 7fe9dad51d80 4 rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-a/store.db:
2026-03-08T22:56:49.003 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr
2026-03-08T22:56:49.003 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.953+0000 7fe9dad51d80 4 rocksdb: Options.error_if_exists: 0
2026-03-08T22:56:49.003 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.953+0000 7fe9dad51d80 4 rocksdb: Options.create_if_missing: 1
2026-03-08T22:56:49.003 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.953+0000 7fe9dad51d80 4 rocksdb: Options.paranoid_checks: 1
2026-03-08T22:56:49.003 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.953+0000 7fe9dad51d80 4 rocksdb: Options.flush_verify_memtable_count: 1
2026-03-08T22:56:49.003 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.953+0000 7fe9dad51d80 4 rocksdb: Options.track_and_verify_wals_in_manifest: 0
2026-03-08T22:56:49.003 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.953+0000 7fe9dad51d80 4 rocksdb: Options.verify_sst_unique_id_in_manifest: 1
2026-03-08T22:56:49.003 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.953+0000 7fe9dad51d80 4 rocksdb: Options.env: 0x55c2a00f4dc0
2026-03-08T22:56:49.003 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.953+0000 7fe9dad51d80 4 rocksdb: Options.fs: PosixFileSystem
2026-03-08T22:56:49.003 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.953+0000 7fe9dad51d80 4 rocksdb: Options.info_log: 0x55c2b4644da0
2026-03-08T22:56:49.003 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.953+0000 7fe9dad51d80 4 rocksdb: Options.max_file_opening_threads: 16
2026-03-08T22:56:49.003 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.953+0000 7fe9dad51d80 4 rocksdb: Options.statistics: (nil)
2026-03-08T22:56:49.003 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.953+0000 7fe9dad51d80 4 rocksdb: Options.use_fsync: 0
2026-03-08T22:56:49.003 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.953+0000 7fe9dad51d80 4 rocksdb: Options.max_log_file_size: 0
2026-03-08T22:56:49.003 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.953+0000 7fe9dad51d80 4 rocksdb: Options.max_manifest_file_size: 1073741824
2026-03-08T22:56:49.003 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.953+0000 7fe9dad51d80 4 rocksdb: Options.log_file_time_to_roll: 0
2026-03-08T22:56:49.003 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.953+0000 7fe9dad51d80 4 rocksdb: Options.keep_log_file_num: 1000
2026-03-08T22:56:49.003 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.953+0000 7fe9dad51d80 4 rocksdb: Options.recycle_log_file_num: 0
2026-03-08T22:56:49.003 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.953+0000 7fe9dad51d80 4 rocksdb: Options.allow_fallocate: 1
2026-03-08T22:56:49.003 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.953+0000 7fe9dad51d80 4 rocksdb: Options.allow_mmap_reads: 0
2026-03-08T22:56:49.003 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.953+0000 7fe9dad51d80 4 rocksdb: Options.allow_mmap_writes: 0
2026-03-08T22:56:49.003 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.953+0000 7fe9dad51d80 4 rocksdb: Options.use_direct_reads: 0
2026-03-08T22:56:49.003 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.953+0000 7fe9dad51d80 4 rocksdb: Options.use_direct_io_for_flush_and_compaction: 0
2026-03-08T22:56:49.003 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.953+0000 7fe9dad51d80 4 rocksdb: Options.create_missing_column_families: 0
2026-03-08T22:56:49.003 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.953+0000 7fe9dad51d80 4 rocksdb: Options.db_log_dir:
2026-03-08T22:56:49.003 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.953+0000 7fe9dad51d80 4 rocksdb: Options.wal_dir:
2026-03-08T22:56:49.003 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.953+0000 7fe9dad51d80 4 rocksdb: Options.table_cache_numshardbits: 6
2026-03-08T22:56:49.003 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.953+0000 7fe9dad51d80 4 rocksdb: Options.WAL_ttl_seconds: 0
2026-03-08T22:56:49.003 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.953+0000 7fe9dad51d80 4 rocksdb: Options.WAL_size_limit_MB: 0
2026-03-08T22:56:49.003 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.953+0000 7fe9dad51d80 4 rocksdb: Options.max_write_batch_group_size_bytes: 1048576
2026-03-08T22:56:49.003 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.953+0000 7fe9dad51d80 4 rocksdb: Options.manifest_preallocation_size: 4194304
2026-03-08T22:56:49.003 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.953+0000 7fe9dad51d80 4 rocksdb: Options.is_fd_close_on_exec: 1
2026-03-08T22:56:49.003 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.953+0000 7fe9dad51d80 4 rocksdb: Options.advise_random_on_open: 1
2026-03-08T22:56:49.003 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.953+0000 7fe9dad51d80 4 rocksdb: Options.db_write_buffer_size: 0
2026-03-08T22:56:49.003 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.953+0000 7fe9dad51d80 4 rocksdb: Options.write_buffer_manager: 0x55c2b463b5e0
2026-03-08T22:56:49.003 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.953+0000 7fe9dad51d80 4 rocksdb: Options.access_hint_on_compaction_start: 1
2026-03-08T22:56:49.003 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.953+0000 7fe9dad51d80 4 rocksdb: Options.random_access_max_buffer_size: 1048576
2026-03-08T22:56:49.003 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.953+0000 7fe9dad51d80 4 rocksdb: Options.use_adaptive_mutex: 0
2026-03-08T22:56:49.003 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.953+0000 7fe9dad51d80 4 rocksdb: Options.rate_limiter: (nil)
2026-03-08T22:56:49.003 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.953+0000 7fe9dad51d80 4 rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0
2026-03-08T22:56:49.003 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.953+0000 7fe9dad51d80 4 rocksdb: Options.wal_recovery_mode: 2
2026-03-08T22:56:49.003 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.953+0000 7fe9dad51d80 4 rocksdb: Options.enable_thread_tracking: 0
2026-03-08T22:56:49.003 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.953+0000 7fe9dad51d80 4 rocksdb: Options.enable_pipelined_write: 0
2026-03-08T22:56:49.003 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.953+0000 7fe9dad51d80 4 rocksdb: Options.unordered_write: 0
2026-03-08T22:56:49.003 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.953+0000 7fe9dad51d80 4 rocksdb: Options.allow_concurrent_memtable_write: 1
2026-03-08T22:56:49.003 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.953+0000 7fe9dad51d80 4 rocksdb: Options.enable_write_thread_adaptive_yield: 1
2026-03-08T22:56:49.003 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.953+0000 7fe9dad51d80 4 rocksdb: Options.write_thread_max_yield_usec: 100
2026-03-08T22:56:49.003 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.953+0000 7fe9dad51d80 4 rocksdb: Options.write_thread_slow_yield_usec: 3
2026-03-08T22:56:49.003 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.953+0000 7fe9dad51d80 4 rocksdb: Options.row_cache: None
2026-03-08T22:56:49.003 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.953+0000 7fe9dad51d80 4 rocksdb: Options.wal_filter: None
2026-03-08T22:56:49.003 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.953+0000 7fe9dad51d80 4 rocksdb: Options.avoid_flush_during_recovery: 0
2026-03-08T22:56:49.003 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.953+0000 7fe9dad51d80 4 rocksdb: Options.allow_ingest_behind: 0
2026-03-08T22:56:49.003 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.953+0000 7fe9dad51d80 4 rocksdb: Options.two_write_queues: 0
2026-03-08T22:56:49.003 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.953+0000 7fe9dad51d80 4 rocksdb: Options.manual_wal_flush: 0
2026-03-08T22:56:49.003 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.953+0000 7fe9dad51d80 4 rocksdb: Options.wal_compression: 0
2026-03-08T22:56:49.003 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.953+0000 7fe9dad51d80 4 rocksdb: Options.atomic_flush: 0
2026-03-08T22:56:49.003 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.953+0000 7fe9dad51d80 4 rocksdb: Options.avoid_unnecessary_blocking_io: 0
2026-03-08T22:56:49.003 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.953+0000 7fe9dad51d80 4 rocksdb: Options.persist_stats_to_disk: 0
2026-03-08T22:56:49.003 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.953+0000 7fe9dad51d80 4 rocksdb: Options.write_dbid_to_manifest: 0
2026-03-08T22:56:49.004 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.953+0000 7fe9dad51d80 4 rocksdb: Options.log_readahead_size: 0
2026-03-08T22:56:49.004 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.953+0000 7fe9dad51d80 4 rocksdb: Options.file_checksum_gen_factory: Unknown
2026-03-08T22:56:49.004 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.953+0000 7fe9dad51d80 4 rocksdb: Options.best_efforts_recovery: 0
2026-03-08T22:56:49.004 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.953+0000 7fe9dad51d80 4 rocksdb: Options.max_bgerror_resume_count: 2147483647
2026-03-08T22:56:49.004 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.953+0000 7fe9dad51d80 4 rocksdb: Options.bgerror_resume_retry_interval: 1000000
2026-03-08T22:56:49.004 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.953+0000 7fe9dad51d80 4 rocksdb: Options.allow_data_in_errors: 0
2026-03-08T22:56:49.004 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.953+0000 7fe9dad51d80 4 rocksdb: Options.db_host_id: __hostname__
2026-03-08T22:56:49.004 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.953+0000 7fe9dad51d80 4 rocksdb: Options.enforce_single_del_contracts: true
2026-03-08T22:56:49.004 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.953+0000 7fe9dad51d80 4 rocksdb: Options.max_background_jobs: 2
2026-03-08T22:56:49.004 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.953+0000 7fe9dad51d80 4 rocksdb: Options.max_background_compactions: -1
2026-03-08T22:56:49.004 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.953+0000 7fe9dad51d80 4 rocksdb: Options.max_subcompactions: 1
2026-03-08T22:56:49.004 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.953+0000 7fe9dad51d80 4 rocksdb: Options.avoid_flush_during_shutdown: 0
2026-03-08T22:56:49.004 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.953+0000 7fe9dad51d80 4 rocksdb: Options.writable_file_max_buffer_size: 1048576
2026-03-08T22:56:49.004 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.953+0000 7fe9dad51d80 4 rocksdb: Options.delayed_write_rate : 16777216
2026-03-08T22:56:49.004 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.953+0000 7fe9dad51d80 4 rocksdb: Options.max_total_wal_size: 0
2026-03-08T22:56:49.004 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.953+0000 7fe9dad51d80 4 rocksdb: Options.delete_obsolete_files_period_micros: 21600000000
2026-03-08T22:56:49.004 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.953+0000 7fe9dad51d80 4 rocksdb: Options.stats_dump_period_sec: 600
2026-03-08T22:56:49.004 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.953+0000 7fe9dad51d80 4 rocksdb: Options.stats_persist_period_sec: 600
2026-03-08T22:56:49.004 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.953+0000 7fe9dad51d80 4 rocksdb: Options.stats_history_buffer_size: 1048576
2026-03-08T22:56:49.004 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.953+0000 7fe9dad51d80 4 rocksdb: Options.max_open_files: -1
2026-03-08T22:56:49.004 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.953+0000 7fe9dad51d80 4 rocksdb: Options.bytes_per_sync: 0
2026-03-08T22:56:49.004 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.953+0000 7fe9dad51d80 4 rocksdb: Options.wal_bytes_per_sync: 0
2026-03-08T22:56:49.004 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.953+0000 7fe9dad51d80 4 rocksdb: Options.strict_bytes_per_sync: 0
2026-03-08T22:56:49.004 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.953+0000 7fe9dad51d80 4 rocksdb: Options.compaction_readahead_size: 0
2026-03-08T22:56:49.004 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.953+0000 7fe9dad51d80 4 rocksdb: Options.max_background_flushes: -1
2026-03-08T22:56:49.004 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.953+0000 7fe9dad51d80 4 rocksdb: Compression algorithms supported:
2026-03-08T22:56:49.004 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.953+0000 7fe9dad51d80 4 rocksdb: kZSTD supported: 0
2026-03-08T22:56:49.004 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.953+0000 7fe9dad51d80 4 rocksdb: kXpressCompression supported: 0
2026-03-08T22:56:49.004 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.953+0000 7fe9dad51d80 4 rocksdb: kBZip2Compression supported: 0
2026-03-08T22:56:49.004 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.953+0000 7fe9dad51d80 4 rocksdb: kZSTDNotFinalCompression supported: 0
2026-03-08T22:56:49.004 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.953+0000 7fe9dad51d80 4 rocksdb: kLZ4Compression supported: 1
2026-03-08T22:56:49.004 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.953+0000 7fe9dad51d80 4 rocksdb: kZlibCompression supported: 1
2026-03-08T22:56:49.004 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.953+0000 7fe9dad51d80 4 rocksdb: kLZ4HCCompression supported: 1
2026-03-08T22:56:49.004 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.953+0000 7fe9dad51d80 4 rocksdb: kSnappyCompression supported: 1
2026-03-08T22:56:49.004 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.953+0000 7fe9dad51d80 4 rocksdb: Fast CRC32 supported: Supported on x86
2026-03-08T22:56:49.004 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.953+0000 7fe9dad51d80 4 rocksdb: DMutex implementation: pthread_mutex_t
2026-03-08T22:56:49.004 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.953+0000 7fe9dad51d80 4 rocksdb: [db/db_impl/db_impl_open.cc:317] Creating manifest 1
2026-03-08T22:56:49.004 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr
2026-03-08T22:56:49.004 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.957+0000 7fe9dad51d80 4 rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000001
2026-03-08T22:56:49.004 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr
2026-03-08T22:56:49.004 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.957+0000 7fe9dad51d80 4 rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
2026-03-08T22:56:49.004 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr
2026-03-08T22:56:49.004 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.957+0000 7fe9dad51d80 4 rocksdb: Options.comparator: leveldb.BytewiseComparator
2026-03-08T22:56:49.004 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.957+0000 7fe9dad51d80 4 rocksdb: Options.merge_operator:
2026-03-08T22:56:49.004 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.957+0000 7fe9dad51d80 4 rocksdb: Options.compaction_filter: None
2026-03-08T22:56:49.004 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.957+0000 7fe9dad51d80 4 rocksdb: Options.compaction_filter_factory: None
2026-03-08T22:56:49.004 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.957+0000 7fe9dad51d80 4 rocksdb: Options.sst_partitioner_factory: None
2026-03-08T22:56:49.004 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.957+0000 7fe9dad51d80 4 rocksdb: Options.memtable_factory: SkipListFactory
2026-03-08T22:56:49.004 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.957+0000 7fe9dad51d80 4 rocksdb: Options.table_factory: BlockBasedTable
2026-03-08T22:56:49.004 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.957+0000 7fe9dad51d80 4 rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c2b4637520)
2026-03-08T22:56:49.004 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr cache_index_and_filter_blocks: 1
2026-03-08T22:56:49.004 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr cache_index_and_filter_blocks_with_high_priority: 0
2026-03-08T22:56:49.004 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr pin_l0_filter_and_index_blocks_in_cache: 0
2026-03-08T22:56:49.004 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr pin_top_level_index_and_filter: 1
2026-03-08T22:56:49.004 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr index_type: 0
2026-03-08T22:56:49.004 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr data_block_index_type: 0
2026-03-08T22:56:49.004 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr index_shortening: 1
2026-03-08T22:56:49.004 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr data_block_hash_table_util_ratio: 0.750000
2026-03-08T22:56:49.004 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr checksum: 4
2026-03-08T22:56:49.004 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr no_block_cache: 0
2026-03-08T22:56:49.004 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr block_cache: 0x55c2b465d350
2026-03-08T22:56:49.004 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr block_cache_name: BinnedLRUCache
2026-03-08T22:56:49.005 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr block_cache_options:
2026-03-08T22:56:49.005 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr capacity : 536870912
2026-03-08T22:56:49.005 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr num_shard_bits : 4
2026-03-08T22:56:49.005 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr strict_capacity_limit : 0
2026-03-08T22:56:49.005 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr high_pri_pool_ratio: 0.000
2026-03-08T22:56:49.005 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr block_cache_compressed: (nil)
2026-03-08T22:56:49.005 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr persistent_cache: (nil)
2026-03-08T22:56:49.005 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr block_size: 4096
2026-03-08T22:56:49.005 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr block_size_deviation: 10
2026-03-08T22:56:49.005 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr block_restart_interval: 16
2026-03-08T22:56:49.005 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr index_block_restart_interval: 1
2026-03-08T22:56:49.005 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr metadata_block_size: 4096
2026-03-08T22:56:49.005 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr partition_filters: 0
2026-03-08T22:56:49.005 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr use_delta_encoding: 1
2026-03-08T22:56:49.005 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr filter_policy: bloomfilter
2026-03-08T22:56:49.005 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr whole_key_filtering: 1
2026-03-08T22:56:49.005 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr verify_compression: 0
2026-03-08T22:56:49.005 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr read_amp_bytes_per_bit: 0
2026-03-08T22:56:49.005 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr format_version: 5
2026-03-08T22:56:49.005 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr enable_index_compression: 1
2026-03-08T22:56:49.005 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr block_align: 0
2026-03-08T22:56:49.005 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr max_auto_readahead_size: 262144
2026-03-08T22:56:49.005 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr prepopulate_block_cache: 0
2026-03-08T22:56:49.005 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr initial_auto_readahead_size: 8192
2026-03-08T22:56:49.005 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr num_file_reads_for_auto_readahead: 2
2026-03-08T22:56:49.005 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr
2026-03-08T22:56:49.005 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.957+0000 7fe9dad51d80 4 rocksdb: Options.write_buffer_size: 33554432
2026-03-08T22:56:49.005 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.957+0000 7fe9dad51d80 4 rocksdb: Options.max_write_buffer_number: 2
2026-03-08T22:56:49.005 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.957+0000 7fe9dad51d80 4 rocksdb: Options.compression: NoCompression
2026-03-08T22:56:49.005 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.957+0000 7fe9dad51d80 4 rocksdb: Options.bottommost_compression: Disabled
2026-03-08T22:56:49.005 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.957+0000 7fe9dad51d80 4 rocksdb: Options.prefix_extractor: nullptr
2026-03-08T22:56:49.005 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.957+0000 7fe9dad51d80 4 rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
2026-03-08T22:56:49.005 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.957+0000 7fe9dad51d80 4 rocksdb: Options.num_levels: 7
2026-03-08T22:56:49.005 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.957+0000 7fe9dad51d80 4 rocksdb: Options.min_write_buffer_number_to_merge: 1
2026-03-08T22:56:49.005 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.957+0000 7fe9dad51d80 4 rocksdb: Options.max_write_buffer_number_to_maintain: 0
2026-03-08T22:56:49.005 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.957+0000 7fe9dad51d80 4 rocksdb: Options.max_write_buffer_size_to_maintain: 0
2026-03-08T22:56:49.005 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.957+0000 7fe9dad51d80 4 rocksdb: Options.bottommost_compression_opts.window_bits: -14
2026-03-08T22:56:49.005 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.957+0000 7fe9dad51d80 4 rocksdb: Options.bottommost_compression_opts.level: 32767
2026-03-08T22:56:49.005 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.957+0000 7fe9dad51d80 4 rocksdb: Options.bottommost_compression_opts.strategy: 0
2026-03-08T22:56:49.005 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.957+0000 7fe9dad51d80 4 rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
2026-03-08T22:56:49.005 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.957+0000 7fe9dad51d80 4 rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
2026-03-08T22:56:49.005 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.957+0000 7fe9dad51d80 4 rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
2026-03-08T22:56:49.005 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.957+0000 7fe9dad51d80 4 rocksdb: Options.bottommost_compression_opts.enabled: false
2026-03-08T22:56:49.005 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.957+0000 7fe9dad51d80 4 rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
2026-03-08T22:56:49.005 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.957+0000 7fe9dad51d80 4 rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
2026-03-08T22:56:49.005 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.957+0000 7fe9dad51d80 4 rocksdb: Options.compression_opts.window_bits: -14
2026-03-08T22:56:49.005 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.957+0000 7fe9dad51d80 4 rocksdb: Options.compression_opts.level: 32767
2026-03-08T22:56:49.005 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.957+0000 7fe9dad51d80 4 rocksdb: Options.compression_opts.strategy: 0
2026-03-08T22:56:49.005 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.957+0000 7fe9dad51d80 4 rocksdb: Options.compression_opts.max_dict_bytes: 0
2026-03-08T22:56:49.005 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.957+0000 7fe9dad51d80 4 rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
2026-03-08T22:56:49.005 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.957+0000 7fe9dad51d80 4 rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
2026-03-08T22:56:49.005 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.957+0000 7fe9dad51d80 4 rocksdb: Options.compression_opts.parallel_threads: 1
2026-03-08T22:56:49.005 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.957+0000 7fe9dad51d80 4 rocksdb: Options.compression_opts.enabled: false
2026-03-08T22:56:49.005 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.957+0000 7fe9dad51d80 4 rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
2026-03-08T22:56:49.005 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.957+0000 7fe9dad51d80 4 rocksdb: Options.level0_file_num_compaction_trigger: 4
2026-03-08T22:56:49.005 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.957+0000 7fe9dad51d80 4 rocksdb: Options.level0_slowdown_writes_trigger: 20
2026-03-08T22:56:49.005
INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.957+0000 7fe9dad51d80 4 rocksdb: Options.level0_stop_writes_trigger: 36 2026-03-08T22:56:49.005 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.957+0000 7fe9dad51d80 4 rocksdb: Options.target_file_size_base: 67108864 2026-03-08T22:56:49.005 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.957+0000 7fe9dad51d80 4 rocksdb: Options.target_file_size_multiplier: 1 2026-03-08T22:56:49.005 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.957+0000 7fe9dad51d80 4 rocksdb: Options.max_bytes_for_level_base: 268435456 2026-03-08T22:56:49.005 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.957+0000 7fe9dad51d80 4 rocksdb: Options.level_compaction_dynamic_level_bytes: 1 2026-03-08T22:56:49.005 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.957+0000 7fe9dad51d80 4 rocksdb: Options.max_bytes_for_level_multiplier: 10.000000 2026-03-08T22:56:49.005 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.957+0000 7fe9dad51d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 2026-03-08T22:56:49.005 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.957+0000 7fe9dad51d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 2026-03-08T22:56:49.006 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.957+0000 7fe9dad51d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 2026-03-08T22:56:49.006 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.957+0000 7fe9dad51d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 2026-03-08T22:56:49.006 
INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.957+0000 7fe9dad51d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 2026-03-08T22:56:49.006 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.957+0000 7fe9dad51d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 2026-03-08T22:56:49.006 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.957+0000 7fe9dad51d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 2026-03-08T22:56:49.006 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.957+0000 7fe9dad51d80 4 rocksdb: Options.max_sequential_skip_in_iterations: 8 2026-03-08T22:56:49.006 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.957+0000 7fe9dad51d80 4 rocksdb: Options.max_compaction_bytes: 1677721600 2026-03-08T22:56:49.006 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.957+0000 7fe9dad51d80 4 rocksdb: Options.ignore_max_compaction_bytes_for_input: true 2026-03-08T22:56:49.006 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.957+0000 7fe9dad51d80 4 rocksdb: Options.arena_block_size: 1048576 2026-03-08T22:56:49.006 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.957+0000 7fe9dad51d80 4 rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 2026-03-08T22:56:49.006 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.957+0000 7fe9dad51d80 4 rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 2026-03-08T22:56:49.006 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.957+0000 7fe9dad51d80 4 rocksdb: Options.disable_auto_compactions: 0 2026-03-08T22:56:49.006 
INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.957+0000 7fe9dad51d80 4 rocksdb: Options.compaction_style: kCompactionStyleLevel 2026-03-08T22:56:49.006 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.957+0000 7fe9dad51d80 4 rocksdb: Options.compaction_pri: kMinOverlappingRatio 2026-03-08T22:56:49.006 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.957+0000 7fe9dad51d80 4 rocksdb: Options.compaction_options_universal.size_ratio: 1 2026-03-08T22:56:49.006 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.957+0000 7fe9dad51d80 4 rocksdb: Options.compaction_options_universal.min_merge_width: 2 2026-03-08T22:56:49.006 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.957+0000 7fe9dad51d80 4 rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 2026-03-08T22:56:49.006 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.957+0000 7fe9dad51d80 4 rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 2026-03-08T22:56:49.006 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.957+0000 7fe9dad51d80 4 rocksdb: Options.compaction_options_universal.compression_size_percent: -1 2026-03-08T22:56:49.006 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.957+0000 7fe9dad51d80 4 rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize 2026-03-08T22:56:49.006 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.957+0000 7fe9dad51d80 4 rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 2026-03-08T22:56:49.006 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 
2026-03-08T22:56:48.957+0000 7fe9dad51d80 4 rocksdb: Options.compaction_options_fifo.allow_compaction: 0 2026-03-08T22:56:49.006 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.957+0000 7fe9dad51d80 4 rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); 2026-03-08T22:56:49.006 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.957+0000 7fe9dad51d80 4 rocksdb: Options.inplace_update_support: 0 2026-03-08T22:56:49.006 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.957+0000 7fe9dad51d80 4 rocksdb: Options.inplace_update_num_locks: 10000 2026-03-08T22:56:49.006 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.957+0000 7fe9dad51d80 4 rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 2026-03-08T22:56:49.006 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.957+0000 7fe9dad51d80 4 rocksdb: Options.memtable_whole_key_filtering: 0 2026-03-08T22:56:49.006 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.957+0000 7fe9dad51d80 4 rocksdb: Options.memtable_huge_page_size: 0 2026-03-08T22:56:49.006 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.957+0000 7fe9dad51d80 4 rocksdb: Options.bloom_locality: 0 2026-03-08T22:56:49.006 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.957+0000 7fe9dad51d80 4 rocksdb: Options.max_successive_merges: 0 2026-03-08T22:56:49.006 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.957+0000 7fe9dad51d80 4 rocksdb: Options.optimize_filters_for_hits: 0 2026-03-08T22:56:49.006 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 
2026-03-08T22:56:48.957+0000 7fe9dad51d80 4 rocksdb: Options.paranoid_file_checks: 0 2026-03-08T22:56:49.006 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.957+0000 7fe9dad51d80 4 rocksdb: Options.force_consistency_checks: 1 2026-03-08T22:56:49.006 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.957+0000 7fe9dad51d80 4 rocksdb: Options.report_bg_io_stats: 0 2026-03-08T22:56:49.006 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.957+0000 7fe9dad51d80 4 rocksdb: Options.ttl: 2592000 2026-03-08T22:56:49.006 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.957+0000 7fe9dad51d80 4 rocksdb: Options.periodic_compaction_seconds: 0 2026-03-08T22:56:49.006 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.957+0000 7fe9dad51d80 4 rocksdb: Options.preclude_last_level_data_seconds: 0 2026-03-08T22:56:49.006 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.957+0000 7fe9dad51d80 4 rocksdb: Options.preserve_internal_time_seconds: 0 2026-03-08T22:56:49.006 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.957+0000 7fe9dad51d80 4 rocksdb: Options.enable_blob_files: false 2026-03-08T22:56:49.006 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.957+0000 7fe9dad51d80 4 rocksdb: Options.min_blob_size: 0 2026-03-08T22:56:49.006 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.957+0000 7fe9dad51d80 4 rocksdb: Options.blob_file_size: 268435456 2026-03-08T22:56:49.006 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.957+0000 7fe9dad51d80 4 rocksdb: Options.blob_compression_type: NoCompression 2026-03-08T22:56:49.006 
INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.957+0000 7fe9dad51d80 4 rocksdb: Options.enable_blob_garbage_collection: false 2026-03-08T22:56:49.006 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.957+0000 7fe9dad51d80 4 rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 2026-03-08T22:56:49.006 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.957+0000 7fe9dad51d80 4 rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 2026-03-08T22:56:49.006 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.957+0000 7fe9dad51d80 4 rocksdb: Options.blob_compaction_readahead_size: 0 2026-03-08T22:56:49.006 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.957+0000 7fe9dad51d80 4 rocksdb: Options.blob_file_starting_level: 0 2026-03-08T22:56:49.006 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.957+0000 7fe9dad51d80 4 rocksdb: Options.experimental_mempurge_threshold: 0.000000 2026-03-08T22:56:49.006 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.957+0000 7fe9dad51d80 4 rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000001 succeeded,manifest_file_number is 1, next_file_number is 3, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0 2026-03-08T22:56:49.006 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr 2026-03-08T22:56:49.006 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.957+0000 7fe9dad51d80 4 rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0 2026-03-08T22:56:49.006 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr 
2026-03-08T22:56:49.006 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.957+0000 7fe9dad51d80 4 rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 477dcbe6-0092-49ac-9529-a5b4e78369e1 2026-03-08T22:56:49.006 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr 2026-03-08T22:56:49.006 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.957+0000 7fe9dad51d80 4 rocksdb: [db/version_set.cc:5047] Creating manifest 5 2026-03-08T22:56:49.006 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr 2026-03-08T22:56:49.006 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.961+0000 7fe9dad51d80 4 rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55c2b465ee00 2026-03-08T22:56:49.006 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.961+0000 7fe9dad51d80 4 rocksdb: DB pointer 0x55c2b4742000 2026-03-08T22:56:49.006 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.961+0000 7fe9d24db640 4 rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- 2026-03-08T22:56:49.006 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.961+0000 7fe9d24db640 4 rocksdb: [db/db_impl/db_impl.cc:1111] 2026-03-08T22:56:49.006 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr ** DB Stats ** 2026-03-08T22:56:49.007 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr Uptime(secs): 0.0 total, 0.0 interval 2026-03-08T22:56:49.007 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s 2026-03-08T22:56:49.007 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 
0.00 GB, 0.00 MB/s 2026-03-08T22:56:49.007 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent 2026-03-08T22:56:49.007 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s 2026-03-08T22:56:49.007 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s 2026-03-08T22:56:49.007 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr Interval stall: 00:00:0.000 H:M:S, 0.0 percent 2026-03-08T22:56:49.007 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr 2026-03-08T22:56:49.007 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr ** Compaction Stats [default] ** 2026-03-08T22:56:49.007 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB) 2026-03-08T22:56:49.007 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ 2026-03-08T22:56:49.007 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0 2026-03-08T22:56:49.007 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0 2026-03-08T22:56:49.007 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr 2026-03-08T22:56:49.007 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr ** 
Compaction Stats [default] ** 2026-03-08T22:56:49.007 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB) 2026-03-08T22:56:49.007 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 2026-03-08T22:56:49.007 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr 2026-03-08T22:56:49.007 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0 2026-03-08T22:56:49.007 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr 2026-03-08T22:56:49.007 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr Uptime(secs): 0.0 total, 0.0 interval 2026-03-08T22:56:49.007 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr Flush(GB): cumulative 0.000, interval 0.000 2026-03-08T22:56:49.007 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr AddFile(GB): cumulative 0.000, interval 0.000 2026-03-08T22:56:49.007 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr AddFile(Total Files): cumulative 0, interval 0 2026-03-08T22:56:49.007 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr AddFile(L0 Files): cumulative 0, interval 0 2026-03-08T22:56:49.007 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr AddFile(Keys): cumulative 0, interval 0 2026-03-08T22:56:49.007 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-08T22:56:49.007 
INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-08T22:56:49.007 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count 2026-03-08T22:56:49.007 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr Block cache BinnedLRUCache@0x55c2b465d350#7 capacity: 512.00 MB usage: 0.00 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 6e-06 secs_since: 0 2026-03-08T22:56:49.007 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr Block cache entry stats(count,size,portion): Misc(1,0.00 KB,0%) 2026-03-08T22:56:49.007 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr 2026-03-08T22:56:49.007 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr ** File Read Latency Histogram By Level [default] ** 2026-03-08T22:56:49.007 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr 2026-03-08T22:56:49.007 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.961+0000 7fe9dad51d80 4 rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work 2026-03-08T22:56:49.007 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.961+0000 7fe9dad51d80 4 rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete 2026-03-08T22:56:49.007 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-08T22:56:48.961+0000 7fe9dad51d80 0 /usr/bin/ceph-mon: created monfs at /var/lib/ceph/mon/ceph-a for mon.a 2026-03-08T22:56:49.007 INFO:teuthology.orchestra.run.vm06.stdout:create mon.a on 
2026-03-08T22:56:49.179 INFO:teuthology.orchestra.run.vm06.stdout:systemctl: stderr Removed /etc/systemd/system/multi-user.target.wants/ceph.target. 2026-03-08T22:56:49.333 INFO:teuthology.orchestra.run.vm06.stdout:systemctl: stderr Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /etc/systemd/system/ceph.target. 2026-03-08T22:56:49.500 INFO:teuthology.orchestra.run.vm06.stdout:systemctl: stderr Created symlink /etc/systemd/system/multi-user.target.wants/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30.target → /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30.target. 2026-03-08T22:56:49.500 INFO:teuthology.orchestra.run.vm06.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph.target.wants/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30.target → /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30.target. 2026-03-08T22:56:49.698 INFO:teuthology.orchestra.run.vm06.stdout:Non-zero exit code 1 from systemctl reset-failed ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@mon.a 2026-03-08T22:56:49.699 INFO:teuthology.orchestra.run.vm06.stdout:systemctl: stderr Failed to reset failed state of unit ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@mon.a.service: Unit ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@mon.a.service not loaded. 2026-03-08T22:56:49.902 INFO:teuthology.orchestra.run.vm06.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30.target.wants/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@mon.a.service → /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service. 2026-03-08T22:56:49.911 INFO:teuthology.orchestra.run.vm06.stdout:firewalld does not appear to be present 2026-03-08T22:56:49.911 INFO:teuthology.orchestra.run.vm06.stdout:Not possible to enable service . firewalld.service is not available 2026-03-08T22:56:49.911 INFO:teuthology.orchestra.run.vm06.stdout:Waiting for mon to start... 
2026-03-08T22:56:49.911 INFO:teuthology.orchestra.run.vm06.stdout:Waiting for mon... 2026-03-08T22:56:50.132 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout cluster: 2026-03-08T22:56:50.132 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout id: e2eb96e6-1b41-11f1-83e5-75f1b5373d30 2026-03-08T22:56:50.132 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout health: HEALTH_OK 2026-03-08T22:56:50.132 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout 2026-03-08T22:56:50.132 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout services: 2026-03-08T22:56:50.132 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout mon: 1 daemons, quorum a (age 0.0577203s) 2026-03-08T22:56:50.132 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout mgr: no daemons active 2026-03-08T22:56:50.132 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout osd: 0 osds: 0 up, 0 in 2026-03-08T22:56:50.132 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout 2026-03-08T22:56:50.132 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout data: 2026-03-08T22:56:50.132 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout pools: 0 pools, 0 pgs 2026-03-08T22:56:50.132 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout objects: 0 objects, 0 B 2026-03-08T22:56:50.132 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout usage: 0 B used, 0 B / 0 B avail 2026-03-08T22:56:50.132 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout pgs: 2026-03-08T22:56:50.132 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout 2026-03-08T22:56:50.132 INFO:teuthology.orchestra.run.vm06.stdout:mon is available 2026-03-08T22:56:50.132 INFO:teuthology.orchestra.run.vm06.stdout:Assimilating anything we can from ceph.conf... 
2026-03-08T22:56:50.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20144]: cluster 2026-03-08T22:56:50.037725+0000 mon.a (mon.0) 1 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0) 2026-03-08T22:56:50.331 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout 2026-03-08T22:56:50.332 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout [global] 2026-03-08T22:56:50.332 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout fsid = e2eb96e6-1b41-11f1-83e5-75f1b5373d30 2026-03-08T22:56:50.332 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout mon_cluster_log_file_level = debug 2026-03-08T22:56:50.332 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout mon_host = [v2:192.168.123.106:3300,v1:192.168.123.106:6789] 2026-03-08T22:56:50.332 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout mon_osd_allow_pg_remap = true 2026-03-08T22:56:50.332 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout mon_osd_allow_primary_affinity = true 2026-03-08T22:56:50.332 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout mon_warn_on_no_sortbitwise = false 2026-03-08T22:56:50.332 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout osd_crush_chooseleaf_type = 0 2026-03-08T22:56:50.332 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout 2026-03-08T22:56:50.332 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout [mgr] 2026-03-08T22:56:50.332 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout mgr/telemetry/nag = false 2026-03-08T22:56:50.332 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout 2026-03-08T22:56:50.332 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout [osd] 2026-03-08T22:56:50.332 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout osd_map_max_advance = 10 2026-03-08T22:56:50.332 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout osd_sloppy_crc = true 
2026-03-08T22:56:50.332 INFO:teuthology.orchestra.run.vm06.stdout:Generating new minimal ceph.conf...
2026-03-08T22:56:50.517 INFO:teuthology.orchestra.run.vm06.stdout:Restarting the monitor...
2026-03-08T22:56:50.554 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 systemd[1]: Stopping Ceph mon.a for e2eb96e6-1b41-11f1-83e5-75f1b5373d30...
2026-03-08T22:56:50.751 INFO:teuthology.orchestra.run.vm06.stdout:Setting public_network to 192.168.123.0/24,192.168.123.1/32 in mon config section
2026-03-08T22:56:50.848 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20144]: debug 2026-03-08T22:56:50.549+0000 7ff89fb3c640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-mon -n mon.a -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-stderr=true (PID: 1) UID: 0
2026-03-08T22:56:50.848 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20144]: debug 2026-03-08T22:56:50.549+0000 7ff89fb3c640 -1 mon.a@0(leader) e1 *** Got Signal Terminated ***
2026-03-08T22:56:50.848 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20528]: ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30-mon-a
2026-03-08T22:56:50.848 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 systemd[1]: ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@mon.a.service: Deactivated successfully.
2026-03-08T22:56:50.848 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 systemd[1]: Stopped Ceph mon.a for e2eb96e6-1b41-11f1-83e5-75f1b5373d30.
2026-03-08T22:56:50.848 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 systemd[1]: Started Ceph mon.a for e2eb96e6-1b41-11f1-83e5-75f1b5373d30.
2026-03-08T22:56:51.133 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.877+0000 7fd4aa952d80 0 set uid:gid to 167:167 (ceph:ceph)
2026-03-08T22:56:51.133 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.877+0000 7fd4aa952d80 0 ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable), process ceph-mon, pid 7
2026-03-08T22:56:51.133 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.877+0000 7fd4aa952d80 0 pidfile_write: ignore empty --pid-file
2026-03-08T22:56:51.133 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.881+0000 7fd4aa952d80 0 load: jerasure load: lrc
2026-03-08T22:56:51.133 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.881+0000 7fd4aa952d80 4 rocksdb: RocksDB version: 7.9.2
2026-03-08T22:56:51.133 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.881+0000 7fd4aa952d80 4 rocksdb: Git sha 0
2026-03-08T22:56:51.133 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.881+0000 7fd4aa952d80 4 rocksdb: Compile date 2026-02-25 18:11:04
2026-03-08T22:56:51.133 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.881+0000 7fd4aa952d80 4 rocksdb: DB SUMMARY
2026-03-08T22:56:51.133 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.881+0000 7fd4aa952d80 4 rocksdb: DB Session ID: BAW1OEVCU1F64HQPBC6U
2026-03-08T22:56:51.133 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.881+0000 7fd4aa952d80 4 rocksdb: CURRENT file: CURRENT
2026-03-08T22:56:51.133 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.881+0000 7fd4aa952d80 4 rocksdb: IDENTITY file: IDENTITY
2026-03-08T22:56:51.133 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.881+0000 7fd4aa952d80 4 rocksdb: MANIFEST file: MANIFEST-000010 size: 179 Bytes
2026-03-08T22:56:51.133 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.881+0000 7fd4aa952d80 4 rocksdb: SST files in /var/lib/ceph/mon/ceph-a/store.db dir, Total Num: 1, files: 000008.sst
2026-03-08T22:56:51.133 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.881+0000 7fd4aa952d80 4 rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-a/store.db: 000009.log size: 76789 ;
2026-03-08T22:56:51.133 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.881+0000 7fd4aa952d80 4 rocksdb: Options.error_if_exists: 0
2026-03-08T22:56:51.133 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.881+0000 7fd4aa952d80 4 rocksdb: Options.create_if_missing: 0
2026-03-08T22:56:51.133 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.881+0000 7fd4aa952d80 4 rocksdb: Options.paranoid_checks: 1
2026-03-08T22:56:51.133 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.881+0000 7fd4aa952d80 4 rocksdb: Options.flush_verify_memtable_count: 1
2026-03-08T22:56:51.133 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.881+0000 7fd4aa952d80 4 rocksdb: Options.track_and_verify_wals_in_manifest: 0
2026-03-08T22:56:51.133 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.881+0000 7fd4aa952d80 4 rocksdb: Options.verify_sst_unique_id_in_manifest: 1
2026-03-08T22:56:51.133 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.881+0000 7fd4aa952d80 4 rocksdb: Options.env: 0x55e59df09dc0
2026-03-08T22:56:51.133 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.881+0000 7fd4aa952d80 4 rocksdb: Options.fs: PosixFileSystem
2026-03-08T22:56:51.133 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.881+0000 7fd4aa952d80 4 rocksdb: Options.info_log: 0x55e59f3b8d00
2026-03-08T22:56:51.134 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.881+0000 7fd4aa952d80 4 rocksdb: Options.max_file_opening_threads: 16
2026-03-08T22:56:51.134 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.881+0000 7fd4aa952d80 4 rocksdb: Options.statistics: (nil)
2026-03-08T22:56:51.134 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.881+0000 7fd4aa952d80 4 rocksdb: Options.use_fsync: 0
2026-03-08T22:56:51.134 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.881+0000 7fd4aa952d80 4 rocksdb: Options.max_log_file_size: 0
2026-03-08T22:56:51.134 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.881+0000 7fd4aa952d80 4 rocksdb: Options.max_manifest_file_size: 1073741824
2026-03-08T22:56:51.134 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.881+0000 7fd4aa952d80 4 rocksdb: Options.log_file_time_to_roll: 0
2026-03-08T22:56:51.134 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.881+0000 7fd4aa952d80 4 rocksdb: Options.keep_log_file_num: 1000
2026-03-08T22:56:51.134 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.881+0000 7fd4aa952d80 4 rocksdb: Options.recycle_log_file_num: 0
2026-03-08T22:56:51.134 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.881+0000 7fd4aa952d80 4 rocksdb: Options.allow_fallocate: 1
2026-03-08T22:56:51.134 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.881+0000 7fd4aa952d80 4 rocksdb: Options.allow_mmap_reads: 0
2026-03-08T22:56:51.134 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.881+0000 7fd4aa952d80 4 rocksdb: Options.allow_mmap_writes: 0
2026-03-08T22:56:51.134 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.881+0000 7fd4aa952d80 4 rocksdb: Options.use_direct_reads: 0
2026-03-08T22:56:51.134 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.881+0000 7fd4aa952d80 4 rocksdb: Options.use_direct_io_for_flush_and_compaction: 0
2026-03-08T22:56:51.134 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.881+0000 7fd4aa952d80 4 rocksdb: Options.create_missing_column_families: 0
2026-03-08T22:56:51.134 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.881+0000 7fd4aa952d80 4 rocksdb: Options.db_log_dir:
2026-03-08T22:56:51.134 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.881+0000 7fd4aa952d80 4 rocksdb: Options.wal_dir:
2026-03-08T22:56:51.134 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.881+0000 7fd4aa952d80 4 rocksdb: Options.table_cache_numshardbits: 6
2026-03-08T22:56:51.134 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.881+0000 7fd4aa952d80 4 rocksdb: Options.WAL_ttl_seconds: 0
2026-03-08T22:56:51.134 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.881+0000 7fd4aa952d80 4 rocksdb: Options.WAL_size_limit_MB: 0
2026-03-08T22:56:51.134 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.881+0000 7fd4aa952d80 4 rocksdb: Options.max_write_batch_group_size_bytes: 1048576
2026-03-08T22:56:51.134 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.881+0000 7fd4aa952d80 4 rocksdb: Options.manifest_preallocation_size: 4194304
2026-03-08T22:56:51.134 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.881+0000 7fd4aa952d80 4 rocksdb: Options.is_fd_close_on_exec: 1
2026-03-08T22:56:51.134 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.881+0000 7fd4aa952d80 4 rocksdb: Options.advise_random_on_open: 1
2026-03-08T22:56:51.134 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.881+0000 7fd4aa952d80 4 rocksdb: Options.db_write_buffer_size: 0
2026-03-08T22:56:51.134 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.881+0000 7fd4aa952d80 4 rocksdb: Options.write_buffer_manager: 0x55e59f3bd900
2026-03-08T22:56:51.134 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.881+0000 7fd4aa952d80 4 rocksdb: Options.access_hint_on_compaction_start: 1
2026-03-08T22:56:51.134 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.881+0000 7fd4aa952d80 4 rocksdb: Options.random_access_max_buffer_size: 1048576
2026-03-08T22:56:51.134 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.881+0000 7fd4aa952d80 4 rocksdb: Options.use_adaptive_mutex: 0
2026-03-08T22:56:51.134 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.881+0000 7fd4aa952d80 4 rocksdb: Options.rate_limiter: (nil)
2026-03-08T22:56:51.134 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.881+0000 7fd4aa952d80 4 rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0
2026-03-08T22:56:51.134 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.881+0000 7fd4aa952d80 4 rocksdb: Options.wal_recovery_mode: 2
2026-03-08T22:56:51.134 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.881+0000 7fd4aa952d80 4 rocksdb: Options.enable_thread_tracking: 0
2026-03-08T22:56:51.134 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.881+0000 7fd4aa952d80 4 rocksdb: Options.enable_pipelined_write: 0
2026-03-08T22:56:51.134 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.881+0000 7fd4aa952d80 4 rocksdb: Options.unordered_write: 0
2026-03-08T22:56:51.134 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.881+0000 7fd4aa952d80 4 rocksdb: Options.allow_concurrent_memtable_write: 1
2026-03-08T22:56:51.134 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.881+0000 7fd4aa952d80 4 rocksdb: Options.enable_write_thread_adaptive_yield: 1
2026-03-08T22:56:51.134 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.881+0000 7fd4aa952d80 4 rocksdb: Options.write_thread_max_yield_usec: 100
2026-03-08T22:56:51.134 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.881+0000 7fd4aa952d80 4 rocksdb: Options.write_thread_slow_yield_usec: 3
2026-03-08T22:56:51.135 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.881+0000 7fd4aa952d80 4 rocksdb: Options.row_cache: None
2026-03-08T22:56:51.135 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.881+0000 7fd4aa952d80 4 rocksdb: Options.wal_filter: None
2026-03-08T22:56:51.135 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.881+0000 7fd4aa952d80 4 rocksdb: Options.avoid_flush_during_recovery: 0
2026-03-08T22:56:51.135 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.881+0000 7fd4aa952d80 4 rocksdb: Options.allow_ingest_behind: 0
2026-03-08T22:56:51.135 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.881+0000 7fd4aa952d80 4 rocksdb: Options.two_write_queues: 0
2026-03-08T22:56:51.135 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.881+0000 7fd4aa952d80 4 rocksdb: Options.manual_wal_flush: 0
2026-03-08T22:56:51.135 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.881+0000 7fd4aa952d80 4 rocksdb: Options.wal_compression: 0
2026-03-08T22:56:51.135 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.881+0000 7fd4aa952d80 4 rocksdb: Options.atomic_flush: 0
2026-03-08T22:56:51.135 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.881+0000 7fd4aa952d80 4 rocksdb: Options.avoid_unnecessary_blocking_io: 0
2026-03-08T22:56:51.135 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.881+0000 7fd4aa952d80 4 rocksdb: Options.persist_stats_to_disk: 0
2026-03-08T22:56:51.135 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.881+0000 7fd4aa952d80 4 rocksdb: Options.write_dbid_to_manifest: 0
2026-03-08T22:56:51.135 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.881+0000 7fd4aa952d80 4 rocksdb: Options.log_readahead_size: 0
2026-03-08T22:56:51.135 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.881+0000 7fd4aa952d80 4 rocksdb: Options.file_checksum_gen_factory: Unknown
2026-03-08T22:56:51.135 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.881+0000 7fd4aa952d80 4 rocksdb: Options.best_efforts_recovery: 0
2026-03-08T22:56:51.135 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.881+0000 7fd4aa952d80 4 rocksdb: Options.max_bgerror_resume_count: 2147483647
2026-03-08T22:56:51.135 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.881+0000 7fd4aa952d80 4 rocksdb: Options.bgerror_resume_retry_interval: 1000000
2026-03-08T22:56:51.135 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.881+0000 7fd4aa952d80 4 rocksdb: Options.allow_data_in_errors: 0
2026-03-08T22:56:51.135 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.881+0000 7fd4aa952d80 4 rocksdb: Options.db_host_id: __hostname__
2026-03-08T22:56:51.135 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.885+0000 7fd4aa952d80 4 rocksdb: Options.enforce_single_del_contracts: true
2026-03-08T22:56:51.135 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.885+0000 7fd4aa952d80 4 rocksdb: Options.max_background_jobs: 2
2026-03-08T22:56:51.135 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.885+0000 7fd4aa952d80 4 rocksdb: Options.max_background_compactions: -1
2026-03-08T22:56:51.135 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.885+0000 7fd4aa952d80 4 rocksdb: Options.max_subcompactions: 1
2026-03-08T22:56:51.135 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.885+0000 7fd4aa952d80 4 rocksdb: Options.avoid_flush_during_shutdown: 0
2026-03-08T22:56:51.135 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.885+0000 7fd4aa952d80 4 rocksdb: Options.writable_file_max_buffer_size: 1048576
2026-03-08T22:56:51.135 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.885+0000 7fd4aa952d80 4 rocksdb: Options.delayed_write_rate : 16777216
2026-03-08T22:56:51.135 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.885+0000 7fd4aa952d80 4 rocksdb: Options.max_total_wal_size: 0
2026-03-08T22:56:51.135 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.885+0000 7fd4aa952d80 4 rocksdb: Options.delete_obsolete_files_period_micros: 21600000000
2026-03-08T22:56:51.135 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.885+0000 7fd4aa952d80 4 rocksdb: Options.stats_dump_period_sec: 600
2026-03-08T22:56:51.135 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.885+0000 7fd4aa952d80 4 rocksdb: Options.stats_persist_period_sec: 600
2026-03-08T22:56:51.135 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.885+0000 7fd4aa952d80 4 rocksdb: Options.stats_history_buffer_size: 1048576
2026-03-08T22:56:51.135 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.885+0000 7fd4aa952d80 4 rocksdb: Options.max_open_files: -1
2026-03-08T22:56:51.135 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.885+0000 7fd4aa952d80 4 rocksdb: Options.bytes_per_sync: 0
2026-03-08T22:56:51.136 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.885+0000 7fd4aa952d80 4 rocksdb: Options.wal_bytes_per_sync: 0
2026-03-08T22:56:51.136 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.885+0000 7fd4aa952d80 4 rocksdb: Options.strict_bytes_per_sync: 0
2026-03-08T22:56:51.136 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.885+0000 7fd4aa952d80 4 rocksdb: Options.compaction_readahead_size: 0
2026-03-08T22:56:51.136 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.885+0000 7fd4aa952d80 4 rocksdb: Options.max_background_flushes: -1
2026-03-08T22:56:51.136 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.885+0000 7fd4aa952d80 4 rocksdb: Compression algorithms supported:
2026-03-08T22:56:51.136 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.885+0000 7fd4aa952d80 4 rocksdb: kZSTD supported: 0
2026-03-08T22:56:51.136 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.885+0000 7fd4aa952d80 4 rocksdb: kXpressCompression supported: 0
2026-03-08T22:56:51.136 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.885+0000 7fd4aa952d80 4 rocksdb: kBZip2Compression supported: 0
2026-03-08T22:56:51.136 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.885+0000 7fd4aa952d80 4 rocksdb: kZSTDNotFinalCompression supported: 0
2026-03-08T22:56:51.136 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.885+0000 7fd4aa952d80 4 rocksdb: kLZ4Compression supported: 1
2026-03-08T22:56:51.136 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.885+0000 7fd4aa952d80 4 rocksdb: kZlibCompression supported: 1
2026-03-08T22:56:51.136 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.885+0000 7fd4aa952d80 4 rocksdb: kLZ4HCCompression supported: 1
2026-03-08T22:56:51.136 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.885+0000 7fd4aa952d80 4 rocksdb: kSnappyCompression supported: 1
2026-03-08T22:56:51.136 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.885+0000 7fd4aa952d80 4 rocksdb: Fast CRC32 supported: Supported on x86
2026-03-08T22:56:51.136 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.885+0000 7fd4aa952d80 4 rocksdb: DMutex implementation: pthread_mutex_t
2026-03-08T22:56:51.136 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.885+0000 7fd4aa952d80 4 rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000010
2026-03-08T22:56:51.136 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.885+0000 7fd4aa952d80 4 rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
2026-03-08T22:56:51.136 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.885+0000 7fd4aa952d80 4 rocksdb: Options.comparator: leveldb.BytewiseComparator
2026-03-08T22:56:51.136 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.885+0000 7fd4aa952d80 4 rocksdb: Options.merge_operator:
2026-03-08T22:56:51.136 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.885+0000 7fd4aa952d80 4 rocksdb: Options.compaction_filter: None
2026-03-08T22:56:51.136 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.885+0000 7fd4aa952d80 4 rocksdb: Options.compaction_filter_factory: None
2026-03-08T22:56:51.136 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.885+0000 7fd4aa952d80 4 rocksdb: Options.sst_partitioner_factory: None
2026-03-08T22:56:51.136 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.885+0000 7fd4aa952d80 4 rocksdb: Options.memtable_factory: SkipListFactory
2026-03-08T22:56:51.136 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.885+0000 7fd4aa952d80 4 rocksdb: Options.table_factory: BlockBasedTable
2026-03-08T22:56:51.136 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.885+0000 7fd4aa952d80 4 rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e59f3b8480)
2026-03-08T22:56:51.136 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: cache_index_and_filter_blocks: 1
2026-03-08T22:56:51.136 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: cache_index_and_filter_blocks_with_high_priority: 0
2026-03-08T22:56:51.136 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: pin_l0_filter_and_index_blocks_in_cache: 0
2026-03-08T22:56:51.136 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: pin_top_level_index_and_filter: 1
2026-03-08T22:56:51.136 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: index_type: 0
2026-03-08T22:56:51.136 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: data_block_index_type: 0
2026-03-08T22:56:51.136 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: index_shortening: 1
2026-03-08T22:56:51.136 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: data_block_hash_table_util_ratio: 0.750000
2026-03-08T22:56:51.136 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: checksum: 4
2026-03-08T22:56:51.136 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: no_block_cache: 0
2026-03-08T22:56:51.136 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: block_cache: 0x55e59f3df350
2026-03-08T22:56:51.136 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: block_cache_name: BinnedLRUCache
2026-03-08T22:56:51.136 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: block_cache_options:
2026-03-08T22:56:51.136 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: capacity : 536870912
2026-03-08T22:56:51.136 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: num_shard_bits : 4
2026-03-08T22:56:51.136 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: strict_capacity_limit : 0
2026-03-08T22:56:51.137 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: high_pri_pool_ratio: 0.000
2026-03-08T22:56:51.137 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: block_cache_compressed: (nil)
2026-03-08T22:56:51.137 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: persistent_cache: (nil)
2026-03-08T22:56:51.137 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: block_size: 4096
2026-03-08T22:56:51.137 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: block_size_deviation: 10
2026-03-08T22:56:51.137 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: block_restart_interval: 16
2026-03-08T22:56:51.137 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: index_block_restart_interval: 1
2026-03-08T22:56:51.137 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: metadata_block_size: 4096
2026-03-08T22:56:51.137 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: partition_filters: 0
2026-03-08T22:56:51.137 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: use_delta_encoding: 1
2026-03-08T22:56:51.137 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: filter_policy: bloomfilter
2026-03-08T22:56:51.137 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: whole_key_filtering: 1
2026-03-08T22:56:51.137 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: verify_compression: 0
2026-03-08T22:56:51.137 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: read_amp_bytes_per_bit: 0
2026-03-08T22:56:51.137 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: format_version: 5
2026-03-08T22:56:51.137 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: enable_index_compression: 1
2026-03-08T22:56:51.137 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: block_align: 0
2026-03-08T22:56:51.137 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: max_auto_readahead_size: 262144
2026-03-08T22:56:51.137 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: prepopulate_block_cache: 0
2026-03-08T22:56:51.137 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: initial_auto_readahead_size: 8192
2026-03-08T22:56:51.137 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: num_file_reads_for_auto_readahead: 2
2026-03-08T22:56:51.137 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.885+0000 7fd4aa952d80 4 rocksdb: Options.write_buffer_size: 33554432
2026-03-08T22:56:51.137 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.885+0000 7fd4aa952d80 4 rocksdb: Options.max_write_buffer_number: 2
2026-03-08T22:56:51.137 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.885+0000 7fd4aa952d80 4 rocksdb: Options.compression: NoCompression
2026-03-08T22:56:51.137 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.885+0000 7fd4aa952d80 4 rocksdb: Options.bottommost_compression: Disabled
2026-03-08T22:56:51.137 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.885+0000 7fd4aa952d80 4 rocksdb: Options.prefix_extractor: nullptr
2026-03-08T22:56:51.137 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.885+0000 7fd4aa952d80 4 rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
2026-03-08T22:56:51.137 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.885+0000 7fd4aa952d80 4 rocksdb: Options.num_levels: 7
2026-03-08T22:56:51.137 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.885+0000 7fd4aa952d80 4 rocksdb: Options.min_write_buffer_number_to_merge: 1
2026-03-08T22:56:51.137 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.885+0000 7fd4aa952d80 4 rocksdb: Options.max_write_buffer_number_to_maintain: 0
2026-03-08T22:56:51.137 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.885+0000 7fd4aa952d80 4 rocksdb: Options.max_write_buffer_size_to_maintain: 0
2026-03-08T22:56:51.137 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.885+0000 7fd4aa952d80 4 rocksdb: Options.bottommost_compression_opts.window_bits: -14
2026-03-08T22:56:51.137 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.885+0000 7fd4aa952d80 4 rocksdb: Options.bottommost_compression_opts.level: 32767
2026-03-08T22:56:51.137 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.885+0000 7fd4aa952d80 4 rocksdb: Options.bottommost_compression_opts.strategy: 0
2026-03-08T22:56:51.137 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.885+0000 7fd4aa952d80 4 rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
2026-03-08T22:56:51.137 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.885+0000 7fd4aa952d80 4 rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
2026-03-08T22:56:51.137 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.885+0000 7fd4aa952d80 4 rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
2026-03-08T22:56:51.137 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.885+0000 7fd4aa952d80 4 rocksdb: Options.bottommost_compression_opts.enabled: false
2026-03-08T22:56:51.137 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.885+0000 7fd4aa952d80 4 rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
2026-03-08T22:56:51.138 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.885+0000 7fd4aa952d80 4 rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
2026-03-08T22:56:51.138 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.885+0000 7fd4aa952d80 4 rocksdb: Options.compression_opts.window_bits: -14
2026-03-08T22:56:51.138 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.885+0000 7fd4aa952d80 4 rocksdb: Options.compression_opts.level: 32767
2026-03-08T22:56:51.138 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.885+0000 7fd4aa952d80 4 rocksdb: Options.compression_opts.strategy: 0
2026-03-08T22:56:51.138 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.885+0000 7fd4aa952d80 4 rocksdb: Options.compression_opts.max_dict_bytes: 0
2026-03-08T22:56:51.138 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.885+0000 7fd4aa952d80 4 rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
2026-03-08T22:56:51.138 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.885+0000 7fd4aa952d80 4 rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
2026-03-08T22:56:51.138 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.885+0000 7fd4aa952d80 4 rocksdb: Options.compression_opts.parallel_threads: 1
2026-03-08T22:56:51.138 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.885+0000 7fd4aa952d80 4 rocksdb: Options.compression_opts.enabled: false
2026-03-08T22:56:51.138 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.885+0000 7fd4aa952d80 4 rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
2026-03-08T22:56:51.138 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.885+0000 7fd4aa952d80 4 rocksdb: Options.level0_file_num_compaction_trigger: 4
2026-03-08T22:56:51.138 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.885+0000 7fd4aa952d80 4 rocksdb: Options.level0_slowdown_writes_trigger: 20
2026-03-08T22:56:51.138 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.885+0000 7fd4aa952d80 4 rocksdb: Options.level0_stop_writes_trigger: 36
2026-03-08T22:56:51.138 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.885+0000 7fd4aa952d80 4 rocksdb: Options.target_file_size_base: 67108864
2026-03-08T22:56:51.138 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.885+0000 7fd4aa952d80 4 rocksdb: Options.target_file_size_multiplier: 1
2026-03-08T22:56:51.138 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.885+0000 7fd4aa952d80 4 rocksdb: Options.max_bytes_for_level_base: 268435456
2026-03-08T22:56:51.138 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.885+0000 7fd4aa952d80 4 rocksdb: Options.level_compaction_dynamic_level_bytes: 1
2026-03-08T22:56:51.138 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.885+0000 7fd4aa952d80 4 rocksdb: Options.max_bytes_for_level_multiplier: 10.000000
2026-03-08T22:56:51.138 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.885+0000 7fd4aa952d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
2026-03-08T22:56:51.138 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.885+0000 7fd4aa952d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
2026-03-08T22:56:51.138 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.885+0000 7fd4aa952d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
2026-03-08T22:56:51.138 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.885+0000 7fd4aa952d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
2026-03-08T22:56:51.138 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.885+0000 7fd4aa952d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
2026-03-08T22:56:51.138 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.885+0000 7fd4aa952d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
2026-03-08T22:56:51.138 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.885+0000 7fd4aa952d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 2026-03-08T22:56:51.138 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.885+0000 7fd4aa952d80 4 rocksdb: Options.max_sequential_skip_in_iterations: 8 2026-03-08T22:56:51.138 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.885+0000 7fd4aa952d80 4 rocksdb: Options.max_compaction_bytes: 1677721600 2026-03-08T22:56:51.138 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.885+0000 7fd4aa952d80 4 rocksdb: Options.ignore_max_compaction_bytes_for_input: true 2026-03-08T22:56:51.138 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.885+0000 7fd4aa952d80 4 rocksdb: Options.arena_block_size: 1048576 2026-03-08T22:56:51.138 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.885+0000 7fd4aa952d80 4 rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 2026-03-08T22:56:51.138 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.885+0000 7fd4aa952d80 4 rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 2026-03-08T22:56:51.138 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.885+0000 7fd4aa952d80 4 rocksdb: Options.disable_auto_compactions: 0 2026-03-08T22:56:51.138 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.889+0000 7fd4aa952d80 4 rocksdb: Options.compaction_style: kCompactionStyleLevel 2026-03-08T22:56:51.138 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.889+0000 7fd4aa952d80 4 rocksdb: 
Options.compaction_pri: kMinOverlappingRatio 2026-03-08T22:56:51.138 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.889+0000 7fd4aa952d80 4 rocksdb: Options.compaction_options_universal.size_ratio: 1 2026-03-08T22:56:51.138 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.889+0000 7fd4aa952d80 4 rocksdb: Options.compaction_options_universal.min_merge_width: 2 2026-03-08T22:56:51.138 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.889+0000 7fd4aa952d80 4 rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 2026-03-08T22:56:51.138 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.889+0000 7fd4aa952d80 4 rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 2026-03-08T22:56:51.138 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.889+0000 7fd4aa952d80 4 rocksdb: Options.compaction_options_universal.compression_size_percent: -1 2026-03-08T22:56:51.138 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.889+0000 7fd4aa952d80 4 rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize 2026-03-08T22:56:51.138 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.889+0000 7fd4aa952d80 4 rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 2026-03-08T22:56:51.138 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.889+0000 7fd4aa952d80 4 rocksdb: Options.compaction_options_fifo.allow_compaction: 0 2026-03-08T22:56:51.138 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.889+0000 7fd4aa952d80 4 rocksdb: Options.table_properties_collectors: 
CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); 2026-03-08T22:56:51.138 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.889+0000 7fd4aa952d80 4 rocksdb: Options.inplace_update_support: 0 2026-03-08T22:56:51.138 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.889+0000 7fd4aa952d80 4 rocksdb: Options.inplace_update_num_locks: 10000 2026-03-08T22:56:51.138 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.889+0000 7fd4aa952d80 4 rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 2026-03-08T22:56:51.138 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.889+0000 7fd4aa952d80 4 rocksdb: Options.memtable_whole_key_filtering: 0 2026-03-08T22:56:51.138 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.889+0000 7fd4aa952d80 4 rocksdb: Options.memtable_huge_page_size: 0 2026-03-08T22:56:51.138 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.889+0000 7fd4aa952d80 4 rocksdb: Options.bloom_locality: 0 2026-03-08T22:56:51.138 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.889+0000 7fd4aa952d80 4 rocksdb: Options.max_successive_merges: 0 2026-03-08T22:56:51.138 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.889+0000 7fd4aa952d80 4 rocksdb: Options.optimize_filters_for_hits: 0 2026-03-08T22:56:51.138 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.889+0000 7fd4aa952d80 4 rocksdb: Options.paranoid_file_checks: 0 2026-03-08T22:56:51.138 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.889+0000 7fd4aa952d80 4 rocksdb: 
Options.force_consistency_checks: 1 2026-03-08T22:56:51.138 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.889+0000 7fd4aa952d80 4 rocksdb: Options.report_bg_io_stats: 0 2026-03-08T22:56:51.139 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.889+0000 7fd4aa952d80 4 rocksdb: Options.ttl: 2592000 2026-03-08T22:56:51.139 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.889+0000 7fd4aa952d80 4 rocksdb: Options.periodic_compaction_seconds: 0 2026-03-08T22:56:51.139 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.889+0000 7fd4aa952d80 4 rocksdb: Options.preclude_last_level_data_seconds: 0 2026-03-08T22:56:51.139 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.889+0000 7fd4aa952d80 4 rocksdb: Options.preserve_internal_time_seconds: 0 2026-03-08T22:56:51.139 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.889+0000 7fd4aa952d80 4 rocksdb: Options.enable_blob_files: false 2026-03-08T22:56:51.139 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.889+0000 7fd4aa952d80 4 rocksdb: Options.min_blob_size: 0 2026-03-08T22:56:51.139 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.889+0000 7fd4aa952d80 4 rocksdb: Options.blob_file_size: 268435456 2026-03-08T22:56:51.139 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.889+0000 7fd4aa952d80 4 rocksdb: Options.blob_compression_type: NoCompression 2026-03-08T22:56:51.139 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.889+0000 7fd4aa952d80 4 rocksdb: Options.enable_blob_garbage_collection: false 2026-03-08T22:56:51.139 
INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.889+0000 7fd4aa952d80 4 rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 2026-03-08T22:56:51.139 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.889+0000 7fd4aa952d80 4 rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 2026-03-08T22:56:51.139 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.889+0000 7fd4aa952d80 4 rocksdb: Options.blob_compaction_readahead_size: 0 2026-03-08T22:56:51.139 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.889+0000 7fd4aa952d80 4 rocksdb: Options.blob_file_starting_level: 0 2026-03-08T22:56:51.139 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.889+0000 7fd4aa952d80 4 rocksdb: Options.experimental_mempurge_threshold: 0.000000 2026-03-08T22:56:51.139 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.889+0000 7fd4aa952d80 4 rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5 2026-03-08T22:56:51.139 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.889+0000 7fd4aa952d80 4 rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5 2026-03-08T22:56:51.139 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.889+0000 7fd4aa952d80 4 rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 477dcbe6-0092-49ac-9529-a5b4e78369e1 2026-03-08T22:56:51.139 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 
2026-03-08T22:56:50.893+0000 7fd4aa952d80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773010610897212, "job": 1, "event": "recovery_started", "wal_files": [9]} 2026-03-08T22:56:51.139 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.893+0000 7fd4aa952d80 4 rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2 2026-03-08T22:56:51.139 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.893+0000 7fd4aa952d80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773010610899167, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 73643, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 231, "table_properties": {"data_size": 71922, "index_size": 174, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 517, "raw_key_size": 10026, "raw_average_key_size": 49, "raw_value_size": 66337, "raw_average_value_size": 328, "num_data_blocks": 8, "num_entries": 202, "num_filter_entries": 202, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1773010610, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "477dcbe6-0092-49ac-9529-a5b4e78369e1", "db_session_id": "BAW1OEVCU1F64HQPBC6U", "orig_file_number": 13, 
"seqno_to_time_mapping": "N/A"}} 2026-03-08T22:56:51.139 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.893+0000 7fd4aa952d80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773010610899409, "job": 1, "event": "recovery_finished"} 2026-03-08T22:56:51.139 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.893+0000 7fd4aa952d80 4 rocksdb: [db/version_set.cc:5047] Creating manifest 15 2026-03-08T22:56:51.139 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.897+0000 7fd4aa952d80 4 rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-a/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 2026-03-08T22:56:51.139 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.897+0000 7fd4aa952d80 4 rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55e59f3e0e00 2026-03-08T22:56:51.139 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.897+0000 7fd4aa952d80 4 rocksdb: DB pointer 0x55e59f4ec000 2026-03-08T22:56:51.139 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.897+0000 7fd4aa952d80 0 starting mon.a rank 0 at public addrs [v2:192.168.123.106:3300/0,v1:192.168.123.106:6789/0] at bind addrs [v2:192.168.123.106:3300/0,v1:192.168.123.106:6789/0] mon_data /var/lib/ceph/mon/ceph-a fsid e2eb96e6-1b41-11f1-83e5-75f1b5373d30 2026-03-08T22:56:51.139 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.897+0000 7fd4aa952d80 1 mon.a@-1(???) 
e1 preinit fsid e2eb96e6-1b41-11f1-83e5-75f1b5373d30 2026-03-08T22:56:51.139 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.897+0000 7fd4aa952d80 0 mon.a@-1(???).mds e1 new map 2026-03-08T22:56:51.139 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.897+0000 7fd4aa952d80 0 mon.a@-1(???).mds e1 print_map 2026-03-08T22:56:51.139 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: e1 2026-03-08T22:56:51.139 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: btime 2026-03-08T22:56:50:042781+0000 2026-03-08T22:56:51.139 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: enable_multiple, ever_enabled_multiple: 1,1 2026-03-08T22:56:51.139 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes} 2026-03-08T22:56:51.139 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: legacy client fscid: -1 2026-03-08T22:56:51.139 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: 2026-03-08T22:56:51.139 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: No filesystems configured 2026-03-08T22:56:51.139 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.897+0000 7fd4aa952d80 0 mon.a@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires 2026-03-08T22:56:51.139 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.897+0000 7fd4aa952d80 0 mon.a@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires 
2026-03-08T22:56:51.139 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.897+0000 7fd4aa952d80 0 mon.a@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires 2026-03-08T22:56:51.139 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.897+0000 7fd4aa952d80 0 mon.a@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires 2026-03-08T22:56:51.139 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: debug 2026-03-08T22:56:50.897+0000 7fd4aa952d80 1 mon.a@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3 2026-03-08T22:56:51.139 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: cluster 2026-03-08T22:56:50.907875+0000 mon.a (mon.0) 1 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0) 2026-03-08T22:56:51.139 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: cluster 2026-03-08T22:56:50.907875+0000 mon.a (mon.0) 1 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0) 2026-03-08T22:56:51.139 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: cluster 2026-03-08T22:56:50.907909+0000 mon.a (mon.0) 2 : cluster [DBG] monmap epoch 1 2026-03-08T22:56:51.139 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: cluster 2026-03-08T22:56:50.907909+0000 mon.a (mon.0) 2 : cluster [DBG] monmap epoch 1 2026-03-08T22:56:51.139 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: cluster 2026-03-08T22:56:50.907914+0000 mon.a (mon.0) 3 : cluster [DBG] fsid e2eb96e6-1b41-11f1-83e5-75f1b5373d30 2026-03-08T22:56:51.139 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: cluster 2026-03-08T22:56:50.907914+0000 mon.a (mon.0) 3 : cluster [DBG] fsid e2eb96e6-1b41-11f1-83e5-75f1b5373d30 2026-03-08T22:56:51.139 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 
bash[20625]: cluster 2026-03-08T22:56:50.907916+0000 mon.a (mon.0) 4 : cluster [DBG] last_changed 2026-03-08T22:56:48.853084+0000 2026-03-08T22:56:51.139 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: cluster 2026-03-08T22:56:50.907916+0000 mon.a (mon.0) 4 : cluster [DBG] last_changed 2026-03-08T22:56:48.853084+0000 2026-03-08T22:56:51.139 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: cluster 2026-03-08T22:56:50.907922+0000 mon.a (mon.0) 5 : cluster [DBG] created 2026-03-08T22:56:48.853084+0000 2026-03-08T22:56:51.139 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: cluster 2026-03-08T22:56:50.907922+0000 mon.a (mon.0) 5 : cluster [DBG] created 2026-03-08T22:56:48.853084+0000 2026-03-08T22:56:51.139 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: cluster 2026-03-08T22:56:50.907927+0000 mon.a (mon.0) 6 : cluster [DBG] min_mon_release 19 (squid) 2026-03-08T22:56:51.139 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: cluster 2026-03-08T22:56:50.907927+0000 mon.a (mon.0) 6 : cluster [DBG] min_mon_release 19 (squid) 2026-03-08T22:56:51.139 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: cluster 2026-03-08T22:56:50.907941+0000 mon.a (mon.0) 7 : cluster [DBG] election_strategy: 1 2026-03-08T22:56:51.139 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: cluster 2026-03-08T22:56:50.907941+0000 mon.a (mon.0) 7 : cluster [DBG] election_strategy: 1 2026-03-08T22:56:51.139 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: cluster 2026-03-08T22:56:50.907943+0000 mon.a (mon.0) 8 : cluster [DBG] 0: [v2:192.168.123.106:3300/0,v1:192.168.123.106:6789/0] mon.a 2026-03-08T22:56:51.139 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: cluster 2026-03-08T22:56:50.907943+0000 mon.a (mon.0) 8 : cluster [DBG] 0: [v2:192.168.123.106:3300/0,v1:192.168.123.106:6789/0] mon.a 
2026-03-08T22:56:51.139 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: cluster 2026-03-08T22:56:50.908157+0000 mon.a (mon.0) 9 : cluster [DBG] fsmap 2026-03-08T22:56:51.139 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: cluster 2026-03-08T22:56:50.908157+0000 mon.a (mon.0) 9 : cluster [DBG] fsmap 2026-03-08T22:56:51.139 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: cluster 2026-03-08T22:56:50.908171+0000 mon.a (mon.0) 10 : cluster [DBG] osdmap e1: 0 total, 0 up, 0 in 2026-03-08T22:56:51.139 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: cluster 2026-03-08T22:56:50.908171+0000 mon.a (mon.0) 10 : cluster [DBG] osdmap e1: 0 total, 0 up, 0 in 2026-03-08T22:56:51.139 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: cluster 2026-03-08T22:56:50.908572+0000 mon.a (mon.0) 11 : cluster [DBG] mgrmap e1: no daemons active 2026-03-08T22:56:51.139 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:50 vm06 bash[20625]: cluster 2026-03-08T22:56:50.908572+0000 mon.a (mon.0) 11 : cluster [DBG] mgrmap e1: no daemons active 2026-03-08T22:56:51.159 INFO:teuthology.orchestra.run.vm06.stdout:Wrote config to /etc/ceph/ceph.conf 2026-03-08T22:56:51.160 INFO:teuthology.orchestra.run.vm06.stdout:Wrote keyring to /etc/ceph/ceph.client.admin.keyring 2026-03-08T22:56:51.160 INFO:teuthology.orchestra.run.vm06.stdout:Creating mgr... 2026-03-08T22:56:51.160 INFO:teuthology.orchestra.run.vm06.stdout:Verifying port 0.0.0.0:9283 ... 2026-03-08T22:56:51.160 INFO:teuthology.orchestra.run.vm06.stdout:Verifying port 0.0.0.0:8765 ... 
2026-03-08T22:56:51.342 INFO:teuthology.orchestra.run.vm06.stdout:Non-zero exit code 1 from systemctl reset-failed ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@mgr.y 2026-03-08T22:56:51.342 INFO:teuthology.orchestra.run.vm06.stdout:systemctl: stderr Failed to reset failed state of unit ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@mgr.y.service: Unit ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@mgr.y.service not loaded. 2026-03-08T22:56:51.465 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:51 vm06 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T22:56:51.520 INFO:teuthology.orchestra.run.vm06.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30.target.wants/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@mgr.y.service → /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service. 2026-03-08T22:56:51.531 INFO:teuthology.orchestra.run.vm06.stdout:firewalld does not appear to be present 2026-03-08T22:56:51.531 INFO:teuthology.orchestra.run.vm06.stdout:Not possible to enable service . firewalld.service is not available 2026-03-08T22:56:51.531 INFO:teuthology.orchestra.run.vm06.stdout:firewalld does not appear to be present 2026-03-08T22:56:51.531 INFO:teuthology.orchestra.run.vm06.stdout:Not possible to open ports <[9283, 8765]>. firewalld.service is not available 2026-03-08T22:56:51.531 INFO:teuthology.orchestra.run.vm06.stdout:Waiting for mgr to start... 2026-03-08T22:56:51.531 INFO:teuthology.orchestra.run.vm06.stdout:Waiting for mgr... 
2026-03-08T22:56:51.745 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:51 vm06 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T22:56:51.784 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout 2026-03-08T22:56:51.784 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout { 2026-03-08T22:56:51.784 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "fsid": "e2eb96e6-1b41-11f1-83e5-75f1b5373d30", 2026-03-08T22:56:51.784 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "health": { 2026-03-08T22:56:51.784 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK", 2026-03-08T22:56:51.784 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "checks": {}, 2026-03-08T22:56:51.784 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "mutes": [] 2026-03-08T22:56:51.784 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout }, 2026-03-08T22:56:51.784 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "election_epoch": 5, 2026-03-08T22:56:51.784 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "quorum": [ 2026-03-08T22:56:51.784 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout 0 2026-03-08T22:56:51.784 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout ], 2026-03-08T22:56:51.784 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "quorum_names": [ 2026-03-08T22:56:51.784 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "a" 2026-03-08T22:56:51.784 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout ], 2026-03-08T22:56:51.784 
INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "quorum_age": 0, 2026-03-08T22:56:51.784 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "monmap": { 2026-03-08T22:56:51.784 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-08T22:56:51.784 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "squid", 2026-03-08T22:56:51.784 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "num_mons": 1 2026-03-08T22:56:51.784 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout }, 2026-03-08T22:56:51.784 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "osdmap": { 2026-03-08T22:56:51.785 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-08T22:56:51.785 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "num_osds": 0, 2026-03-08T22:56:51.785 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "num_up_osds": 0, 2026-03-08T22:56:51.785 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "osd_up_since": 0, 2026-03-08T22:56:51.785 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "num_in_osds": 0, 2026-03-08T22:56:51.785 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "osd_in_since": 0, 2026-03-08T22:56:51.785 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0 2026-03-08T22:56:51.785 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout }, 2026-03-08T22:56:51.785 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "pgmap": { 2026-03-08T22:56:51.785 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "pgs_by_state": [], 2026-03-08T22:56:51.785 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "num_pgs": 0, 2026-03-08T22:56:51.785 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "num_pools": 0, 2026-03-08T22:56:51.785 
INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "num_objects": 0, 2026-03-08T22:56:51.785 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "data_bytes": 0, 2026-03-08T22:56:51.785 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "bytes_used": 0, 2026-03-08T22:56:51.785 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "bytes_avail": 0, 2026-03-08T22:56:51.785 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "bytes_total": 0 2026-03-08T22:56:51.785 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout }, 2026-03-08T22:56:51.785 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "fsmap": { 2026-03-08T22:56:51.785 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-08T22:56:51.785 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "btime": "2026-03-08T22:56:50:042781+0000", 2026-03-08T22:56:51.785 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "by_rank": [], 2026-03-08T22:56:51.785 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "up:standby": 0 2026-03-08T22:56:51.785 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout }, 2026-03-08T22:56:51.785 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "mgrmap": { 2026-03-08T22:56:51.785 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "available": false, 2026-03-08T22:56:51.785 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "num_standbys": 0, 2026-03-08T22:56:51.785 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "modules": [ 2026-03-08T22:56:51.785 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "iostat", 2026-03-08T22:56:51.785 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "nfs", 2026-03-08T22:56:51.785 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "restful" 2026-03-08T22:56:51.785 
INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout ], 2026-03-08T22:56:51.785 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-08T22:56:51.785 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout }, 2026-03-08T22:56:51.785 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "servicemap": { 2026-03-08T22:56:51.785 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-08T22:56:51.785 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "modified": "2026-03-08T22:56:50.043482+0000", 2026-03-08T22:56:51.785 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-08T22:56:51.785 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout }, 2026-03-08T22:56:51.785 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "progress_events": {} 2026-03-08T22:56:51.785 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout } 2026-03-08T22:56:51.785 INFO:teuthology.orchestra.run.vm06.stdout:mgr not available, waiting (1/15)... 2026-03-08T22:56:52.030 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:56:51 vm06 bash[20883]: debug 2026-03-08T22:56:51.921+0000 7f0b9c245140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-08T22:56:52.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:52 vm06 bash[20625]: audit 2026-03-08T22:56:51.124092+0000 mon.a (mon.0) 12 : audit [INF] from='client.? 192.168.123.106:0/3759692065' entity='client.admin' 2026-03-08T22:56:52.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:52 vm06 bash[20625]: audit 2026-03-08T22:56:51.124092+0000 mon.a (mon.0) 12 : audit [INF] from='client.? 192.168.123.106:0/3759692065' entity='client.admin' 2026-03-08T22:56:52.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:52 vm06 bash[20625]: audit 2026-03-08T22:56:51.732231+0000 mon.a (mon.0) 13 : audit [DBG] from='client.? 
192.168.123.106:0/3683851812' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-08T22:56:52.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:52 vm06 bash[20625]: audit 2026-03-08T22:56:51.732231+0000 mon.a (mon.0) 13 : audit [DBG] from='client.? 192.168.123.106:0/3683851812' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-08T22:56:52.530 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:56:52 vm06 bash[20883]: debug 2026-03-08T22:56:52.201+0000 7f0b9c245140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-08T22:56:53.018 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:56:52 vm06 bash[20883]: debug 2026-03-08T22:56:52.653+0000 7f0b9c245140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-08T22:56:53.018 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:56:52 vm06 bash[20883]: debug 2026-03-08T22:56:52.737+0000 7f0b9c245140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-08T22:56:53.018 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:56:52 vm06 bash[20883]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 2026-03-08T22:56:53.018 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:56:52 vm06 bash[20883]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 
2026-03-08T22:56:53.018 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:56:52 vm06 bash[20883]: from numpy import show_config as show_numpy_config 2026-03-08T22:56:53.019 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:56:52 vm06 bash[20883]: debug 2026-03-08T22:56:52.865+0000 7f0b9c245140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-08T22:56:53.280 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:56:53 vm06 bash[20883]: debug 2026-03-08T22:56:53.013+0000 7f0b9c245140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-08T22:56:53.280 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:56:53 vm06 bash[20883]: debug 2026-03-08T22:56:53.053+0000 7f0b9c245140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-08T22:56:53.280 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:56:53 vm06 bash[20883]: debug 2026-03-08T22:56:53.089+0000 7f0b9c245140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-08T22:56:53.280 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:56:53 vm06 bash[20883]: debug 2026-03-08T22:56:53.137+0000 7f0b9c245140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-08T22:56:53.280 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:56:53 vm06 bash[20883]: debug 2026-03-08T22:56:53.193+0000 7f0b9c245140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-08T22:56:53.919 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:56:53 vm06 bash[20883]: debug 2026-03-08T22:56:53.637+0000 7f0b9c245140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-08T22:56:53.919 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:56:53 vm06 bash[20883]: debug 2026-03-08T22:56:53.677+0000 7f0b9c245140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-08T22:56:53.919 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:56:53 vm06 bash[20883]: debug 2026-03-08T22:56:53.713+0000 7f0b9c245140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 
2026-03-08T22:56:53.919 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:56:53 vm06 bash[20883]: debug 2026-03-08T22:56:53.869+0000 7f0b9c245140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-08T22:56:53.919 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:56:53 vm06 bash[20883]: debug 2026-03-08T22:56:53.913+0000 7f0b9c245140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-08T22:56:54.190 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:56:53 vm06 bash[20883]: debug 2026-03-08T22:56:53.965+0000 7f0b9c245140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-08T22:56:54.191 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:56:54 vm06 bash[20883]: debug 2026-03-08T22:56:54.089+0000 7f0b9c245140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-08T22:56:54.223 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout 2026-03-08T22:56:54.223 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout { 2026-03-08T22:56:54.224 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "fsid": "e2eb96e6-1b41-11f1-83e5-75f1b5373d30", 2026-03-08T22:56:54.224 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "health": { 2026-03-08T22:56:54.224 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK", 2026-03-08T22:56:54.224 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "checks": {}, 2026-03-08T22:56:54.224 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "mutes": [] 2026-03-08T22:56:54.224 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout }, 2026-03-08T22:56:54.224 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "election_epoch": 5, 2026-03-08T22:56:54.224 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "quorum": [ 2026-03-08T22:56:54.224 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout 0 2026-03-08T22:56:54.224 
INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout ], 2026-03-08T22:56:54.224 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "quorum_names": [ 2026-03-08T22:56:54.224 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "a" 2026-03-08T22:56:54.224 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout ], 2026-03-08T22:56:54.224 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "quorum_age": 3, 2026-03-08T22:56:54.224 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "monmap": { 2026-03-08T22:56:54.224 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-08T22:56:54.224 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "squid", 2026-03-08T22:56:54.224 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "num_mons": 1 2026-03-08T22:56:54.224 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout }, 2026-03-08T22:56:54.224 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "osdmap": { 2026-03-08T22:56:54.224 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-08T22:56:54.224 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "num_osds": 0, 2026-03-08T22:56:54.224 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "num_up_osds": 0, 2026-03-08T22:56:54.224 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "osd_up_since": 0, 2026-03-08T22:56:54.224 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "num_in_osds": 0, 2026-03-08T22:56:54.224 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "osd_in_since": 0, 2026-03-08T22:56:54.224 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0 2026-03-08T22:56:54.225 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout }, 2026-03-08T22:56:54.225 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout 
"pgmap": { 2026-03-08T22:56:54.225 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "pgs_by_state": [], 2026-03-08T22:56:54.225 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "num_pgs": 0, 2026-03-08T22:56:54.225 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "num_pools": 0, 2026-03-08T22:56:54.225 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "num_objects": 0, 2026-03-08T22:56:54.225 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "data_bytes": 0, 2026-03-08T22:56:54.225 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "bytes_used": 0, 2026-03-08T22:56:54.225 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "bytes_avail": 0, 2026-03-08T22:56:54.225 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "bytes_total": 0 2026-03-08T22:56:54.225 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout }, 2026-03-08T22:56:54.225 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "fsmap": { 2026-03-08T22:56:54.225 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-08T22:56:54.225 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "btime": "2026-03-08T22:56:50:042781+0000", 2026-03-08T22:56:54.225 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "by_rank": [], 2026-03-08T22:56:54.226 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "up:standby": 0 2026-03-08T22:56:54.226 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout }, 2026-03-08T22:56:54.226 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "mgrmap": { 2026-03-08T22:56:54.226 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "available": false, 2026-03-08T22:56:54.226 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "num_standbys": 0, 2026-03-08T22:56:54.226 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "modules": [ 
2026-03-08T22:56:54.226 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "iostat", 2026-03-08T22:56:54.226 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "nfs", 2026-03-08T22:56:54.226 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "restful" 2026-03-08T22:56:54.226 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout ], 2026-03-08T22:56:54.226 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-08T22:56:54.226 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout }, 2026-03-08T22:56:54.226 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "servicemap": { 2026-03-08T22:56:54.226 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-08T22:56:54.226 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "modified": "2026-03-08T22:56:50.043482+0000", 2026-03-08T22:56:54.226 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-08T22:56:54.226 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout }, 2026-03-08T22:56:54.226 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "progress_events": {} 2026-03-08T22:56:54.226 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout } 2026-03-08T22:56:54.226 INFO:teuthology.orchestra.run.vm06.stdout:mgr not available, waiting (2/15)... 2026-03-08T22:56:54.456 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:54 vm06 bash[20625]: audit 2026-03-08T22:56:54.138601+0000 mon.a (mon.0) 14 : audit [DBG] from='client.? 192.168.123.106:0/4228632031' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-08T22:56:54.456 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:54 vm06 bash[20625]: audit 2026-03-08T22:56:54.138601+0000 mon.a (mon.0) 14 : audit [DBG] from='client.? 
192.168.123.106:0/4228632031' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-08T22:56:54.456 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:56:54 vm06 bash[20883]: debug 2026-03-08T22:56:54.265+0000 7f0b9c245140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-08T22:56:54.780 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:56:54 vm06 bash[20883]: debug 2026-03-08T22:56:54.449+0000 7f0b9c245140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-08T22:56:54.780 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:56:54 vm06 bash[20883]: debug 2026-03-08T22:56:54.489+0000 7f0b9c245140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-08T22:56:54.780 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:56:54 vm06 bash[20883]: debug 2026-03-08T22:56:54.533+0000 7f0b9c245140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-08T22:56:54.780 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:56:54 vm06 bash[20883]: debug 2026-03-08T22:56:54.681+0000 7f0b9c245140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-08T22:56:55.192 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:56:54 vm06 bash[20883]: debug 2026-03-08T22:56:54.905+0000 7f0b9c245140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-08T22:56:55.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:55 vm06 bash[20625]: cluster 2026-03-08T22:56:54.912358+0000 mon.a (mon.0) 15 : cluster [INF] Activating manager daemon y 2026-03-08T22:56:55.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:55 vm06 bash[20625]: cluster 2026-03-08T22:56:54.912358+0000 mon.a (mon.0) 15 : cluster [INF] Activating manager daemon y 2026-03-08T22:56:55.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:55 vm06 bash[20625]: cluster 2026-03-08T22:56:54.920019+0000 mon.a (mon.0) 16 : cluster [DBG] mgrmap e2: y(active, starting, since 0.00772146s) 2026-03-08T22:56:55.530 
INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:55 vm06 bash[20625]: cluster 2026-03-08T22:56:54.920019+0000 mon.a (mon.0) 16 : cluster [DBG] mgrmap e2: y(active, starting, since 0.00772146s) 2026-03-08T22:56:55.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:55 vm06 bash[20625]: audit 2026-03-08T22:56:54.920994+0000 mon.a (mon.0) 17 : audit [DBG] from='mgr.14100 192.168.123.106:0/399162631' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-08T22:56:55.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:55 vm06 bash[20625]: audit 2026-03-08T22:56:54.920994+0000 mon.a (mon.0) 17 : audit [DBG] from='mgr.14100 192.168.123.106:0/399162631' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-08T22:56:55.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:55 vm06 bash[20625]: audit 2026-03-08T22:56:54.921104+0000 mon.a (mon.0) 18 : audit [DBG] from='mgr.14100 192.168.123.106:0/399162631' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-08T22:56:55.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:55 vm06 bash[20625]: audit 2026-03-08T22:56:54.921104+0000 mon.a (mon.0) 18 : audit [DBG] from='mgr.14100 192.168.123.106:0/399162631' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-08T22:56:55.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:55 vm06 bash[20625]: audit 2026-03-08T22:56:54.921186+0000 mon.a (mon.0) 19 : audit [DBG] from='mgr.14100 192.168.123.106:0/399162631' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-08T22:56:55.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:55 vm06 bash[20625]: audit 2026-03-08T22:56:54.921186+0000 mon.a (mon.0) 19 : audit [DBG] from='mgr.14100 192.168.123.106:0/399162631' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-08T22:56:55.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:55 vm06 bash[20625]: audit 2026-03-08T22:56:54.922157+0000 mon.a (mon.0) 20 : audit [DBG] from='mgr.14100 
192.168.123.106:0/399162631' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-08T22:56:55.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:55 vm06 bash[20625]: audit 2026-03-08T22:56:54.922157+0000 mon.a (mon.0) 20 : audit [DBG] from='mgr.14100 192.168.123.106:0/399162631' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-08T22:56:55.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:55 vm06 bash[20625]: audit 2026-03-08T22:56:54.922244+0000 mon.a (mon.0) 21 : audit [DBG] from='mgr.14100 192.168.123.106:0/399162631' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-08T22:56:55.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:55 vm06 bash[20625]: audit 2026-03-08T22:56:54.922244+0000 mon.a (mon.0) 21 : audit [DBG] from='mgr.14100 192.168.123.106:0/399162631' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-08T22:56:55.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:55 vm06 bash[20625]: cluster 2026-03-08T22:56:54.929285+0000 mon.a (mon.0) 22 : cluster [INF] Manager daemon y is now available 2026-03-08T22:56:55.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:55 vm06 bash[20625]: cluster 2026-03-08T22:56:54.929285+0000 mon.a (mon.0) 22 : cluster [INF] Manager daemon y is now available 2026-03-08T22:56:55.531 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:55 vm06 bash[20625]: audit 2026-03-08T22:56:54.939623+0000 mon.a (mon.0) 23 : audit [INF] from='mgr.14100 192.168.123.106:0/399162631' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-08T22:56:55.531 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:55 vm06 bash[20625]: audit 2026-03-08T22:56:54.939623+0000 mon.a (mon.0) 23 : audit [INF] from='mgr.14100 192.168.123.106:0/399162631' entity='mgr.y' cmd=[{"prefix":"config 
rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-08T22:56:55.531 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:55 vm06 bash[20625]: audit 2026-03-08T22:56:54.941884+0000 mon.a (mon.0) 24 : audit [INF] from='mgr.14100 192.168.123.106:0/399162631' entity='mgr.y' 2026-03-08T22:56:55.531 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:55 vm06 bash[20625]: audit 2026-03-08T22:56:54.941884+0000 mon.a (mon.0) 24 : audit [INF] from='mgr.14100 192.168.123.106:0/399162631' entity='mgr.y' 2026-03-08T22:56:55.531 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:55 vm06 bash[20625]: audit 2026-03-08T22:56:54.944463+0000 mon.a (mon.0) 25 : audit [INF] from='mgr.14100 192.168.123.106:0/399162631' entity='mgr.y' 2026-03-08T22:56:55.531 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:55 vm06 bash[20625]: audit 2026-03-08T22:56:54.944463+0000 mon.a (mon.0) 25 : audit [INF] from='mgr.14100 192.168.123.106:0/399162631' entity='mgr.y' 2026-03-08T22:56:55.531 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:55 vm06 bash[20625]: audit 2026-03-08T22:56:54.944871+0000 mon.a (mon.0) 26 : audit [INF] from='mgr.14100 192.168.123.106:0/399162631' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-08T22:56:55.531 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:55 vm06 bash[20625]: audit 2026-03-08T22:56:54.944871+0000 mon.a (mon.0) 26 : audit [INF] from='mgr.14100 192.168.123.106:0/399162631' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-08T22:56:55.531 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:55 vm06 bash[20625]: audit 2026-03-08T22:56:54.947494+0000 mon.a (mon.0) 27 : audit [INF] from='mgr.14100 192.168.123.106:0/399162631' entity='mgr.y' 2026-03-08T22:56:55.531 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:55 vm06 bash[20625]: audit 2026-03-08T22:56:54.947494+0000 
mon.a (mon.0) 27 : audit [INF] from='mgr.14100 192.168.123.106:0/399162631' entity='mgr.y' 2026-03-08T22:56:56.590 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout 2026-03-08T22:56:56.590 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout { 2026-03-08T22:56:56.590 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "fsid": "e2eb96e6-1b41-11f1-83e5-75f1b5373d30", 2026-03-08T22:56:56.590 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "health": { 2026-03-08T22:56:56.590 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK", 2026-03-08T22:56:56.590 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "checks": {}, 2026-03-08T22:56:56.590 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "mutes": [] 2026-03-08T22:56:56.590 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout }, 2026-03-08T22:56:56.590 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "election_epoch": 5, 2026-03-08T22:56:56.590 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "quorum": [ 2026-03-08T22:56:56.590 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout 0 2026-03-08T22:56:56.590 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout ], 2026-03-08T22:56:56.590 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "quorum_names": [ 2026-03-08T22:56:56.590 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "a" 2026-03-08T22:56:56.590 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout ], 2026-03-08T22:56:56.590 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "quorum_age": 5, 2026-03-08T22:56:56.590 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "monmap": { 2026-03-08T22:56:56.590 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-08T22:56:56.590 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout 
"min_mon_release_name": "squid", 2026-03-08T22:56:56.590 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "num_mons": 1 2026-03-08T22:56:56.590 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout }, 2026-03-08T22:56:56.590 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "osdmap": { 2026-03-08T22:56:56.590 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-08T22:56:56.590 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "num_osds": 0, 2026-03-08T22:56:56.590 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "num_up_osds": 0, 2026-03-08T22:56:56.590 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "osd_up_since": 0, 2026-03-08T22:56:56.590 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "num_in_osds": 0, 2026-03-08T22:56:56.590 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "osd_in_since": 0, 2026-03-08T22:56:56.590 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0 2026-03-08T22:56:56.590 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout }, 2026-03-08T22:56:56.590 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "pgmap": { 2026-03-08T22:56:56.590 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "pgs_by_state": [], 2026-03-08T22:56:56.590 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "num_pgs": 0, 2026-03-08T22:56:56.591 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "num_pools": 0, 2026-03-08T22:56:56.591 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "num_objects": 0, 2026-03-08T22:56:56.591 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "data_bytes": 0, 2026-03-08T22:56:56.591 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "bytes_used": 0, 2026-03-08T22:56:56.591 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "bytes_avail": 0, 
2026-03-08T22:56:56.591 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "bytes_total": 0 2026-03-08T22:56:56.591 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout }, 2026-03-08T22:56:56.591 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "fsmap": { 2026-03-08T22:56:56.591 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-08T22:56:56.591 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "btime": "2026-03-08T22:56:50:042781+0000", 2026-03-08T22:56:56.591 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "by_rank": [], 2026-03-08T22:56:56.591 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "up:standby": 0 2026-03-08T22:56:56.591 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout }, 2026-03-08T22:56:56.591 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "mgrmap": { 2026-03-08T22:56:56.591 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "available": true, 2026-03-08T22:56:56.591 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "num_standbys": 0, 2026-03-08T22:56:56.591 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "modules": [ 2026-03-08T22:56:56.591 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "iostat", 2026-03-08T22:56:56.591 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "nfs", 2026-03-08T22:56:56.592 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "restful" 2026-03-08T22:56:56.592 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout ], 2026-03-08T22:56:56.592 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-08T22:56:56.592 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout }, 2026-03-08T22:56:56.592 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "servicemap": { 2026-03-08T22:56:56.592 
INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-08T22:56:56.592 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "modified": "2026-03-08T22:56:50.043482+0000", 2026-03-08T22:56:56.592 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-08T22:56:56.592 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout }, 2026-03-08T22:56:56.592 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "progress_events": {} 2026-03-08T22:56:56.592 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout } 2026-03-08T22:56:56.592 INFO:teuthology.orchestra.run.vm06.stdout:mgr is available 2026-03-08T22:56:56.844 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout 2026-03-08T22:56:56.844 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout [global] 2026-03-08T22:56:56.844 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout fsid = e2eb96e6-1b41-11f1-83e5-75f1b5373d30 2026-03-08T22:56:56.844 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout mon_cluster_log_file_level = debug 2026-03-08T22:56:56.844 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout mon_host = [v2:192.168.123.106:3300,v1:192.168.123.106:6789] 2026-03-08T22:56:56.844 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout mon_osd_allow_pg_remap = true 2026-03-08T22:56:56.844 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout mon_osd_allow_primary_affinity = true 2026-03-08T22:56:56.844 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout mon_warn_on_no_sortbitwise = false 2026-03-08T22:56:56.844 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout osd_crush_chooseleaf_type = 0 2026-03-08T22:56:56.844 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout 2026-03-08T22:56:56.844 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout [mgr] 2026-03-08T22:56:56.844 
INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout mgr/telemetry/nag = false 2026-03-08T22:56:56.844 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout 2026-03-08T22:56:56.844 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout [osd] 2026-03-08T22:56:56.844 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout osd_map_max_advance = 10 2026-03-08T22:56:56.844 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout osd_sloppy_crc = true 2026-03-08T22:56:56.844 INFO:teuthology.orchestra.run.vm06.stdout:Enabling cephadm module... 2026-03-08T22:56:57.246 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:56 vm06 bash[20625]: cluster 2026-03-08T22:56:55.926686+0000 mon.a (mon.0) 28 : cluster [DBG] mgrmap e3: y(active, since 1.01439s) 2026-03-08T22:56:57.247 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:56 vm06 bash[20625]: cluster 2026-03-08T22:56:55.926686+0000 mon.a (mon.0) 28 : cluster [DBG] mgrmap e3: y(active, since 1.01439s) 2026-03-08T22:56:57.247 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:56 vm06 bash[20625]: audit 2026-03-08T22:56:56.554358+0000 mon.a (mon.0) 29 : audit [DBG] from='client.? 192.168.123.106:0/716421922' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-08T22:56:57.247 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:56 vm06 bash[20625]: audit 2026-03-08T22:56:56.554358+0000 mon.a (mon.0) 29 : audit [DBG] from='client.? 192.168.123.106:0/716421922' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-08T22:56:57.247 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:56 vm06 bash[20625]: audit 2026-03-08T22:56:56.808872+0000 mon.a (mon.0) 30 : audit [INF] from='client.? 
192.168.123.106:0/2037649355' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch 2026-03-08T22:56:57.247 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:56 vm06 bash[20625]: audit 2026-03-08T22:56:56.808872+0000 mon.a (mon.0) 30 : audit [INF] from='client.? 192.168.123.106:0/2037649355' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch 2026-03-08T22:56:58.233 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:57 vm06 bash[20625]: cluster 2026-03-08T22:56:56.926118+0000 mon.a (mon.0) 31 : cluster [DBG] mgrmap e4: y(active, since 2s) 2026-03-08T22:56:58.234 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:57 vm06 bash[20625]: cluster 2026-03-08T22:56:56.926118+0000 mon.a (mon.0) 31 : cluster [DBG] mgrmap e4: y(active, since 2s) 2026-03-08T22:56:58.234 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:57 vm06 bash[20625]: audit 2026-03-08T22:56:57.085640+0000 mon.a (mon.0) 32 : audit [INF] from='client.? 192.168.123.106:0/3781783341' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch 2026-03-08T22:56:58.234 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:57 vm06 bash[20625]: audit 2026-03-08T22:56:57.085640+0000 mon.a (mon.0) 32 : audit [INF] from='client.? 
192.168.123.106:0/3781783341' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch 2026-03-08T22:56:58.234 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:56:57 vm06 bash[20883]: ignoring --setuser ceph since I am not root 2026-03-08T22:56:58.234 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:56:57 vm06 bash[20883]: ignoring --setgroup ceph since I am not root 2026-03-08T22:56:58.234 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:56:58 vm06 bash[20883]: debug 2026-03-08T22:56:58.061+0000 7fd076d4d140 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-08T22:56:58.234 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:56:58 vm06 bash[20883]: debug 2026-03-08T22:56:58.105+0000 7fd076d4d140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-08T22:56:58.289 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout { 2026-03-08T22:56:58.289 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "epoch": 5, 2026-03-08T22:56:58.289 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "available": true, 2026-03-08T22:56:58.289 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "active_name": "y", 2026-03-08T22:56:58.289 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "num_standby": 0 2026-03-08T22:56:58.289 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout } 2026-03-08T22:56:58.289 INFO:teuthology.orchestra.run.vm06.stdout:Waiting for the mgr to restart... 2026-03-08T22:56:58.289 INFO:teuthology.orchestra.run.vm06.stdout:Waiting for mgr epoch 5... 
2026-03-08T22:56:58.530 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:56:58 vm06 bash[20883]: debug 2026-03-08T22:56:58.229+0000 7fd076d4d140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-08T22:56:58.932 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:56:58 vm06 bash[20883]: debug 2026-03-08T22:56:58.573+0000 7fd076d4d140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-08T22:56:59.227 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:56:59 vm06 bash[20883]: debug 2026-03-08T22:56:59.029+0000 7fd076d4d140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-08T22:56:59.227 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:56:59 vm06 bash[20883]: debug 2026-03-08T22:56:59.109+0000 7fd076d4d140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-08T22:56:59.227 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:58 vm06 bash[20625]: audit 2026-03-08T22:56:57.930339+0000 mon.a (mon.0) 33 : audit [INF] from='client.? 192.168.123.106:0/3781783341' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished 2026-03-08T22:56:59.227 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:58 vm06 bash[20625]: audit 2026-03-08T22:56:57.930339+0000 mon.a (mon.0) 33 : audit [INF] from='client.? 
192.168.123.106:0/3781783341' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished 2026-03-08T22:56:59.227 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:58 vm06 bash[20625]: cluster 2026-03-08T22:56:57.933237+0000 mon.a (mon.0) 34 : cluster [DBG] mgrmap e5: y(active, since 3s) 2026-03-08T22:56:59.227 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:58 vm06 bash[20625]: cluster 2026-03-08T22:56:57.933237+0000 mon.a (mon.0) 34 : cluster [DBG] mgrmap e5: y(active, since 3s) 2026-03-08T22:56:59.227 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:58 vm06 bash[20625]: audit 2026-03-08T22:56:58.244704+0000 mon.a (mon.0) 35 : audit [DBG] from='client.? 192.168.123.106:0/663254225' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-08T22:56:59.227 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:56:58 vm06 bash[20625]: audit 2026-03-08T22:56:58.244704+0000 mon.a (mon.0) 35 : audit [DBG] from='client.? 192.168.123.106:0/663254225' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-08T22:56:59.480 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:56:59 vm06 bash[20883]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 2026-03-08T22:56:59.480 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:56:59 vm06 bash[20883]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 
2026-03-08T22:56:59.480 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:56:59 vm06 bash[20883]: from numpy import show_config as show_numpy_config
2026-03-08T22:56:59.480 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:56:59 vm06 bash[20883]: debug 2026-03-08T22:56:59.225+0000 7fd076d4d140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
2026-03-08T22:56:59.480 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:56:59 vm06 bash[20883]: debug 2026-03-08T22:56:59.357+0000 7fd076d4d140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
2026-03-08T22:56:59.480 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:56:59 vm06 bash[20883]: debug 2026-03-08T22:56:59.393+0000 7fd076d4d140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
2026-03-08T22:56:59.480 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:56:59 vm06 bash[20883]: debug 2026-03-08T22:56:59.429+0000 7fd076d4d140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
2026-03-08T22:56:59.780 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:56:59 vm06 bash[20883]: debug 2026-03-08T22:56:59.473+0000 7fd076d4d140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
2026-03-08T22:56:59.780 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:56:59 vm06 bash[20883]: debug 2026-03-08T22:56:59.529+0000 7fd076d4d140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
2026-03-08T22:57:00.228 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:56:59 vm06 bash[20883]: debug 2026-03-08T22:56:59.961+0000 7fd076d4d140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
2026-03-08T22:57:00.228 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:57:00 vm06 bash[20883]: debug 2026-03-08T22:57:00.001+0000 7fd076d4d140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
2026-03-08T22:57:00.228 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:57:00 vm06 bash[20883]: debug 2026-03-08T22:57:00.041+0000 7fd076d4d140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
2026-03-08T22:57:00.228 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:57:00 vm06 bash[20883]: debug 2026-03-08T22:57:00.181+0000 7fd076d4d140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
2026-03-08T22:57:00.530 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:57:00 vm06 bash[20883]: debug 2026-03-08T22:57:00.221+0000 7fd076d4d140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
2026-03-08T22:57:00.530 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:57:00 vm06 bash[20883]: debug 2026-03-08T22:57:00.261+0000 7fd076d4d140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
2026-03-08T22:57:00.530 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:57:00 vm06 bash[20883]: debug 2026-03-08T22:57:00.369+0000 7fd076d4d140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
2026-03-08T22:57:00.789 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:57:00 vm06 bash[20883]: debug 2026-03-08T22:57:00.525+0000 7fd076d4d140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
2026-03-08T22:57:00.790 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:57:00 vm06 bash[20883]: debug 2026-03-08T22:57:00.697+0000 7fd076d4d140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
2026-03-08T22:57:00.790 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:57:00 vm06 bash[20883]: debug 2026-03-08T22:57:00.737+0000 7fd076d4d140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
2026-03-08T22:57:01.173 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:57:00 vm06 bash[20883]: debug 2026-03-08T22:57:00.785+0000 7fd076d4d140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
2026-03-08T22:57:01.173 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:57:00 vm06 bash[20883]: debug 2026-03-08T22:57:00.937+0000 7fd076d4d140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
2026-03-08T22:57:01.530 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:57:01 vm06 bash[20883]: debug 2026-03-08T22:57:01.165+0000 7fd076d4d140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
2026-03-08T22:57:01.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:01 vm06 bash[20625]: cluster 2026-03-08T22:57:01.174524+0000 mon.a (mon.0) 36 : cluster [INF] Active manager daemon y restarted
2026-03-08T22:57:01.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:01 vm06 bash[20625]: cluster 2026-03-08T22:57:01.174524+0000 mon.a (mon.0) 36 : cluster [INF] Active manager daemon y restarted
2026-03-08T22:57:01.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:01 vm06 bash[20625]: cluster 2026-03-08T22:57:01.174981+0000 mon.a (mon.0) 37 : cluster [INF] Activating manager daemon y
2026-03-08T22:57:01.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:01 vm06 bash[20625]: cluster 2026-03-08T22:57:01.174981+0000 mon.a (mon.0) 37 : cluster [INF] Activating manager daemon y
2026-03-08T22:57:01.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:01 vm06 bash[20625]: cluster 2026-03-08T22:57:01.199820+0000 mon.a (mon.0) 38 : cluster [DBG] osdmap e2: 0 total, 0 up, 0 in
2026-03-08T22:57:01.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:01 vm06 bash[20625]: cluster 2026-03-08T22:57:01.199820+0000 mon.a (mon.0) 38 : cluster [DBG] osdmap e2: 0 total, 0 up, 0 in
2026-03-08T22:57:01.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:01 vm06 bash[20625]: cluster 2026-03-08T22:57:01.199944+0000 mon.a (mon.0) 39 : cluster [DBG] mgrmap e6: y(active, starting, since 0.0251017s)
2026-03-08T22:57:01.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:01 vm06 bash[20625]: cluster 2026-03-08T22:57:01.199944+0000 mon.a (mon.0) 39 : cluster [DBG] mgrmap e6: y(active, starting, since 0.0251017s)
2026-03-08T22:57:01.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:01 vm06 bash[20625]: audit 2026-03-08T22:57:01.202159+0000 mon.a (mon.0) 40 : audit [DBG] from='mgr.14118 192.168.123.106:0/3226879139' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-08T22:57:01.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:01 vm06 bash[20625]: audit 2026-03-08T22:57:01.202159+0000 mon.a (mon.0) 40 : audit [DBG] from='mgr.14118 192.168.123.106:0/3226879139' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-08T22:57:01.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:01 vm06 bash[20625]: audit 2026-03-08T22:57:01.202359+0000 mon.a (mon.0) 41 : audit [DBG] from='mgr.14118 192.168.123.106:0/3226879139' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch
2026-03-08T22:57:01.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:01 vm06 bash[20625]: audit 2026-03-08T22:57:01.202359+0000 mon.a (mon.0) 41 : audit [DBG] from='mgr.14118 192.168.123.106:0/3226879139' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch
2026-03-08T22:57:01.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:01 vm06 bash[20625]: audit 2026-03-08T22:57:01.203510+0000 mon.a (mon.0) 42 : audit [DBG] from='mgr.14118 192.168.123.106:0/3226879139' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch
2026-03-08T22:57:01.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:01 vm06 bash[20625]: audit 2026-03-08T22:57:01.203510+0000 mon.a (mon.0) 42 : audit [DBG] from='mgr.14118 192.168.123.106:0/3226879139' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch
2026-03-08T22:57:01.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:01 vm06 bash[20625]: audit 2026-03-08T22:57:01.203641+0000 mon.a (mon.0) 43 : audit [DBG] from='mgr.14118 192.168.123.106:0/3226879139' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch
2026-03-08T22:57:01.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:01 vm06 bash[20625]: audit 2026-03-08T22:57:01.203641+0000 mon.a (mon.0) 43 : audit [DBG] from='mgr.14118 192.168.123.106:0/3226879139' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch
2026-03-08T22:57:01.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:01 vm06 bash[20625]: audit 2026-03-08T22:57:01.203726+0000 mon.a (mon.0) 44 : audit [DBG] from='mgr.14118 192.168.123.106:0/3226879139' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch
2026-03-08T22:57:01.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:01 vm06 bash[20625]: audit 2026-03-08T22:57:01.203726+0000 mon.a (mon.0) 44 : audit [DBG] from='mgr.14118 192.168.123.106:0/3226879139' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch
2026-03-08T22:57:01.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:01 vm06 bash[20625]: cluster 2026-03-08T22:57:01.209494+0000 mon.a (mon.0) 45 : cluster [INF] Manager daemon y is now available
2026-03-08T22:57:01.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:01 vm06 bash[20625]: cluster 2026-03-08T22:57:01.209494+0000 mon.a (mon.0) 45 : cluster [INF] Manager daemon y is now available
2026-03-08T22:57:01.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:01 vm06 bash[20625]: audit 2026-03-08T22:57:01.224588+0000 mon.a (mon.0) 46 : audit [INF] from='mgr.14118 192.168.123.106:0/3226879139' entity='mgr.y'
2026-03-08T22:57:01.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:01 vm06 bash[20625]: audit 2026-03-08T22:57:01.224588+0000 mon.a (mon.0) 46 : audit [INF] from='mgr.14118 192.168.123.106:0/3226879139' entity='mgr.y'
2026-03-08T22:57:02.248 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout {
2026-03-08T22:57:02.248 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "mgrmap_epoch": 7,
2026-03-08T22:57:02.248 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "initialized": true
2026-03-08T22:57:02.248 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout }
2026-03-08T22:57:02.248 INFO:teuthology.orchestra.run.vm06.stdout:mgr epoch 5 is available
2026-03-08T22:57:02.248 INFO:teuthology.orchestra.run.vm06.stdout:Setting orchestrator backend to cephadm...
2026-03-08T22:57:02.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:02 vm06 bash[20625]: cephadm 2026-03-08T22:57:01.216793+0000 mgr.y (mgr.14118) 1 : cephadm [INF] Found migration_current of "None". Setting to last migration.
2026-03-08T22:57:02.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:02 vm06 bash[20625]: cephadm 2026-03-08T22:57:01.216793+0000 mgr.y (mgr.14118) 1 : cephadm [INF] Found migration_current of "None". Setting to last migration.
2026-03-08T22:57:02.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:02 vm06 bash[20625]: audit 2026-03-08T22:57:01.237993+0000 mon.a (mon.0) 47 : audit [INF] from='mgr.14118 192.168.123.106:0/3226879139' entity='mgr.y'
2026-03-08T22:57:02.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:02 vm06 bash[20625]: audit 2026-03-08T22:57:01.237993+0000 mon.a (mon.0) 47 : audit [INF] from='mgr.14118 192.168.123.106:0/3226879139' entity='mgr.y'
2026-03-08T22:57:02.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:02 vm06 bash[20625]: audit 2026-03-08T22:57:01.249781+0000 mon.a (mon.0) 48 : audit [DBG] from='mgr.14118 192.168.123.106:0/3226879139' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T22:57:02.531 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:02 vm06 bash[20625]: audit 2026-03-08T22:57:01.249781+0000 mon.a (mon.0) 48 : audit [DBG] from='mgr.14118 192.168.123.106:0/3226879139' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T22:57:02.531 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:02 vm06 bash[20625]: audit 2026-03-08T22:57:01.266217+0000 mon.a (mon.0) 49 : audit [DBG] from='mgr.14118 192.168.123.106:0/3226879139' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T22:57:02.531 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:02 vm06 bash[20625]: audit 2026-03-08T22:57:01.266217+0000 mon.a (mon.0) 49 : audit [DBG] from='mgr.14118 192.168.123.106:0/3226879139' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T22:57:02.531 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:02 vm06 bash[20625]: audit 2026-03-08T22:57:01.269013+0000 mon.a (mon.0) 50 : audit [INF] from='mgr.14118 192.168.123.106:0/3226879139' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch
2026-03-08T22:57:02.531 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:02 vm06 bash[20625]: audit 2026-03-08T22:57:01.269013+0000 mon.a (mon.0) 50 : audit [INF] from='mgr.14118 192.168.123.106:0/3226879139' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch
2026-03-08T22:57:02.531 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:02 vm06 bash[20625]: audit 2026-03-08T22:57:01.274091+0000 mon.a (mon.0) 51 : audit [INF] from='mgr.14118 192.168.123.106:0/3226879139' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch
2026-03-08T22:57:02.531 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:02 vm06 bash[20625]: audit 2026-03-08T22:57:01.274091+0000 mon.a (mon.0) 51 : audit [INF] from='mgr.14118 192.168.123.106:0/3226879139' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch
2026-03-08T22:57:02.531 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:02 vm06 bash[20625]: audit 2026-03-08T22:57:01.690283+0000 mon.a (mon.0) 52 : audit [INF] from='mgr.14118 192.168.123.106:0/3226879139' entity='mgr.y'
2026-03-08T22:57:02.531 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:02 vm06 bash[20625]: audit 2026-03-08T22:57:01.690283+0000 mon.a (mon.0) 52 : audit [INF] from='mgr.14118 192.168.123.106:0/3226879139' entity='mgr.y'
2026-03-08T22:57:02.531 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:02 vm06 bash[20625]: audit 2026-03-08T22:57:01.698127+0000 mon.a (mon.0) 53 : audit [INF] from='mgr.14118 192.168.123.106:0/3226879139' entity='mgr.y'
2026-03-08T22:57:02.531 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:02 vm06 bash[20625]: audit 2026-03-08T22:57:01.698127+0000 mon.a (mon.0) 53 : audit [INF] from='mgr.14118 192.168.123.106:0/3226879139' entity='mgr.y'
2026-03-08T22:57:02.531 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:02 vm06 bash[20625]: cluster 2026-03-08T22:57:02.200552+0000 mon.a (mon.0) 54 : cluster [DBG] mgrmap e7: y(active, since 1.02571s)
2026-03-08T22:57:02.531 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:02 vm06 bash[20625]: cluster 2026-03-08T22:57:02.200552+0000 mon.a (mon.0) 54 : cluster [DBG] mgrmap e7: y(active, since 1.02571s)
2026-03-08T22:57:02.889 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout value unchanged
2026-03-08T22:57:02.889 INFO:teuthology.orchestra.run.vm06.stdout:Generating ssh key...
2026-03-08T22:57:03.421 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:57:03 vm06 bash[20883]: Generating public/private ed25519 key pair.
2026-03-08T22:57:03.421 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:57:03 vm06 bash[20883]: Your identification has been saved in /tmp/tmpdhbu93kc/key
2026-03-08T22:57:03.421 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:57:03 vm06 bash[20883]: Your public key has been saved in /tmp/tmpdhbu93kc/key.pub
2026-03-08T22:57:03.421 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:57:03 vm06 bash[20883]: The key fingerprint is:
2026-03-08T22:57:03.421 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:57:03 vm06 bash[20883]: SHA256:jzb/YeJeR7YiqzMRDmDb634NV8FlSpQh9qmVsWodnj4 ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30
2026-03-08T22:57:03.421 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:57:03 vm06 bash[20883]: The key's randomart image is:
2026-03-08T22:57:03.421 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:57:03 vm06 bash[20883]: +--[ED25519 256]--+
2026-03-08T22:57:03.421 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:57:03 vm06 bash[20883]: | oo=+o |
2026-03-08T22:57:03.421 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:57:03 vm06 bash[20883]: | o . ++B |
2026-03-08T22:57:03.421 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:57:03 vm06 bash[20883]: | . + O. |
2026-03-08T22:57:03.421 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:57:03 vm06 bash[20883]: | . o . *.o |
2026-03-08T22:57:03.421 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:57:03 vm06 bash[20883]: | +S.+.+ o |
2026-03-08T22:57:03.421 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:57:03 vm06 bash[20883]: | . ++.. o . |
2026-03-08T22:57:03.422 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:57:03 vm06 bash[20883]: | . +=+ E o |
2026-03-08T22:57:03.422 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:57:03 vm06 bash[20883]: | ..++.* = |
2026-03-08T22:57:03.422 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:57:03 vm06 bash[20883]: | ....==.. |
2026-03-08T22:57:03.422 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:57:03 vm06 bash[20883]: +----[SHA256]-----+
2026-03-08T22:57:03.454 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF6UbXXd8VrxvPrA3qKrmPS87uQum50pZikbr4Z7mWmF ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30
2026-03-08T22:57:03.454 INFO:teuthology.orchestra.run.vm06.stdout:Wrote public SSH key to /home/ubuntu/cephtest/ceph.pub
2026-03-08T22:57:03.454 INFO:teuthology.orchestra.run.vm06.stdout:Adding key to root@localhost authorized_keys...
2026-03-08T22:57:03.455 INFO:teuthology.orchestra.run.vm06.stdout:Adding host vm06...
2026-03-08T22:57:03.703 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:03 vm06 bash[20625]: audit 2026-03-08T22:57:02.198667+0000 mgr.y (mgr.14118) 2 : audit [DBG] from='client.14122 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
2026-03-08T22:57:03.704 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:03 vm06 bash[20625]: audit 2026-03-08T22:57:02.198667+0000 mgr.y (mgr.14118) 2 : audit [DBG] from='client.14122 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
2026-03-08T22:57:03.704 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:03 vm06 bash[20625]: audit 2026-03-08T22:57:02.205010+0000 mgr.y (mgr.14118) 3 : audit [DBG] from='client.14122 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
2026-03-08T22:57:03.704 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:03 vm06 bash[20625]: audit 2026-03-08T22:57:02.205010+0000 mgr.y (mgr.14118) 3 : audit [DBG] from='client.14122 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
2026-03-08T22:57:03.704 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:03 vm06 bash[20625]: audit 2026-03-08T22:57:02.520279+0000 mgr.y (mgr.14118) 4 : audit [DBG] from='client.14130 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
2026-03-08T22:57:03.704 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:03 vm06 bash[20625]: audit 2026-03-08T22:57:02.520279+0000 mgr.y (mgr.14118) 4 : audit [DBG] from='client.14130 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
2026-03-08T22:57:03.704 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:03 vm06 bash[20625]: audit 2026-03-08T22:57:02.524015+0000 mon.a (mon.0) 55 : audit [INF] from='mgr.14118 192.168.123.106:0/3226879139' entity='mgr.y'
2026-03-08T22:57:03.704 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:03 vm06 bash[20625]: audit 2026-03-08T22:57:02.524015+0000 mon.a (mon.0) 55 : audit [INF] from='mgr.14118 192.168.123.106:0/3226879139' entity='mgr.y'
2026-03-08T22:57:03.704 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:03 vm06 bash[20625]: audit 2026-03-08T22:57:02.531001+0000 mon.a (mon.0) 56 : audit [DBG] from='mgr.14118 192.168.123.106:0/3226879139' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T22:57:03.704 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:03 vm06 bash[20625]: audit 2026-03-08T22:57:02.531001+0000 mon.a (mon.0) 56 : audit [DBG] from='mgr.14118 192.168.123.106:0/3226879139' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T22:57:03.704 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:03 vm06 bash[20625]: cephadm 2026-03-08T22:57:02.711483+0000 mgr.y (mgr.14118) 5 : cephadm [INF] [08/Mar/2026:22:57:02] ENGINE Bus STARTING
2026-03-08T22:57:03.704 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:03 vm06 bash[20625]: cephadm 2026-03-08T22:57:02.711483+0000 mgr.y (mgr.14118) 5 : cephadm [INF] [08/Mar/2026:22:57:02] ENGINE Bus STARTING
2026-03-08T22:57:03.704 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:03 vm06 bash[20625]: cephadm 2026-03-08T22:57:02.823125+0000 mgr.y (mgr.14118) 6 : cephadm [INF] [08/Mar/2026:22:57:02] ENGINE Serving on https://192.168.123.106:7150
2026-03-08T22:57:03.704 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:03 vm06 bash[20625]: cephadm 2026-03-08T22:57:02.823125+0000 mgr.y (mgr.14118) 6 : cephadm [INF] [08/Mar/2026:22:57:02] ENGINE Serving on https://192.168.123.106:7150
2026-03-08T22:57:03.704 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:03 vm06 bash[20625]: cephadm 2026-03-08T22:57:02.824007+0000 mgr.y (mgr.14118) 7 : cephadm [INF] [08/Mar/2026:22:57:02] ENGINE Client ('192.168.123.106', 37270) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
2026-03-08T22:57:03.704 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:03 vm06 bash[20625]: cephadm 2026-03-08T22:57:02.824007+0000 mgr.y (mgr.14118) 7 : cephadm [INF] [08/Mar/2026:22:57:02] ENGINE Client ('192.168.123.106', 37270) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
2026-03-08T22:57:03.704 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:03 vm06 bash[20625]: audit 2026-03-08T22:57:02.849018+0000 mgr.y (mgr.14118) 8 : audit [DBG] from='client.14132 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "root", "target": ["mon-mgr", ""]}]: dispatch
2026-03-08T22:57:03.704 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:03 vm06 bash[20625]: audit 2026-03-08T22:57:02.849018+0000 mgr.y (mgr.14118) 8 : audit [DBG] from='client.14132 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "root", "target": ["mon-mgr", ""]}]: dispatch
2026-03-08T22:57:03.704 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:03 vm06 bash[20625]: cephadm 2026-03-08T22:57:02.924678+0000 mgr.y (mgr.14118) 9 : cephadm [INF] [08/Mar/2026:22:57:02] ENGINE Serving on http://192.168.123.106:8765
2026-03-08T22:57:03.704 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:03 vm06 bash[20625]: cephadm 2026-03-08T22:57:02.924678+0000 mgr.y (mgr.14118) 9 : cephadm [INF] [08/Mar/2026:22:57:02] ENGINE Serving on http://192.168.123.106:8765
2026-03-08T22:57:03.704 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:03 vm06 bash[20625]: cephadm 2026-03-08T22:57:02.924736+0000 mgr.y (mgr.14118) 10 : cephadm [INF] [08/Mar/2026:22:57:02] ENGINE Bus STARTED
2026-03-08T22:57:03.704 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:03 vm06 bash[20625]: cephadm 2026-03-08T22:57:02.924736+0000 mgr.y (mgr.14118) 10 : cephadm [INF] [08/Mar/2026:22:57:02] ENGINE Bus STARTED
2026-03-08T22:57:03.704 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:03 vm06 bash[20625]: audit 2026-03-08T22:57:02.925439+0000 mon.a (mon.0) 57 : audit [DBG] from='mgr.14118 192.168.123.106:0/3226879139' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T22:57:03.704 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:03 vm06 bash[20625]: audit 2026-03-08T22:57:02.925439+0000 mon.a (mon.0) 57 : audit [DBG] from='mgr.14118 192.168.123.106:0/3226879139' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T22:57:03.704 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:03 vm06 bash[20625]: audit 2026-03-08T22:57:03.137200+0000 mon.a (mon.0) 58 : audit [INF] from='mgr.14118 192.168.123.106:0/3226879139' entity='mgr.y'
2026-03-08T22:57:03.704 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:03 vm06 bash[20625]: audit 2026-03-08T22:57:03.137200+0000 mon.a (mon.0) 58 : audit [INF] from='mgr.14118 192.168.123.106:0/3226879139' entity='mgr.y'
2026-03-08T22:57:03.704 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:03 vm06 bash[20625]: audit 2026-03-08T22:57:03.140189+0000 mon.a (mon.0) 59 : audit [INF] from='mgr.14118 192.168.123.106:0/3226879139' entity='mgr.y'
2026-03-08T22:57:03.704 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:03 vm06 bash[20625]: audit 2026-03-08T22:57:03.140189+0000 mon.a (mon.0) 59 : audit [INF] from='mgr.14118 192.168.123.106:0/3226879139' entity='mgr.y'
2026-03-08T22:57:04.647 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:04 vm06 bash[20625]: audit 2026-03-08T22:57:03.111949+0000 mgr.y (mgr.14118) 11 : audit [DBG] from='client.14134 -' entity='client.admin' cmd=[{"prefix": "cephadm generate-key", "target": ["mon-mgr", ""]}]: dispatch
2026-03-08T22:57:04.648 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:04 vm06 bash[20625]: audit 2026-03-08T22:57:03.111949+0000 mgr.y (mgr.14118) 11 : audit [DBG] from='client.14134 -' entity='client.admin' cmd=[{"prefix": "cephadm generate-key", "target": ["mon-mgr", ""]}]: dispatch
2026-03-08T22:57:04.648 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:04 vm06 bash[20625]: cephadm 2026-03-08T22:57:03.112198+0000 mgr.y (mgr.14118) 12 : cephadm [INF] Generating ssh key...
2026-03-08T22:57:04.648 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:04 vm06 bash[20625]: cephadm 2026-03-08T22:57:03.112198+0000 mgr.y (mgr.14118) 12 : cephadm [INF] Generating ssh key...
2026-03-08T22:57:04.648 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:04 vm06 bash[20625]: audit 2026-03-08T22:57:03.410389+0000 mgr.y (mgr.14118) 13 : audit [DBG] from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
2026-03-08T22:57:04.648 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:04 vm06 bash[20625]: audit 2026-03-08T22:57:03.410389+0000 mgr.y (mgr.14118) 13 : audit [DBG] from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
2026-03-08T22:57:04.648 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:04 vm06 bash[20625]: audit 2026-03-08T22:57:03.695735+0000 mgr.y (mgr.14118) 14 : audit [DBG] from='client.14138 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm06", "addr": "192.168.123.106", "target": ["mon-mgr", ""]}]: dispatch
2026-03-08T22:57:04.648 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:04 vm06 bash[20625]: audit 2026-03-08T22:57:03.695735+0000 mgr.y (mgr.14118) 14 : audit [DBG] from='client.14138 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm06", "addr": "192.168.123.106", "target": ["mon-mgr", ""]}]: dispatch
2026-03-08T22:57:04.648 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:04 vm06 bash[20625]: cluster 2026-03-08T22:57:04.144123+0000 mon.a (mon.0) 60 : cluster [DBG] mgrmap e8: y(active, since 2s)
2026-03-08T22:57:04.648 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:04 vm06 bash[20625]: cluster 2026-03-08T22:57:04.144123+0000 mon.a (mon.0) 60 : cluster [DBG] mgrmap e8: y(active, since 2s)
2026-03-08T22:57:05.669 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout Added host 'vm06' with addr '192.168.123.106'
2026-03-08T22:57:05.669 INFO:teuthology.orchestra.run.vm06.stdout:Deploying unmanaged mon service...
2026-03-08T22:57:05.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:05 vm06 bash[20625]: cephadm 2026-03-08T22:57:04.311685+0000 mgr.y (mgr.14118) 15 : cephadm [INF] Deploying cephadm binary to vm06
2026-03-08T22:57:05.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:05 vm06 bash[20625]: cephadm 2026-03-08T22:57:04.311685+0000 mgr.y (mgr.14118) 15 : cephadm [INF] Deploying cephadm binary to vm06
2026-03-08T22:57:06.049 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout Scheduled mon update...
2026-03-08T22:57:06.049 INFO:teuthology.orchestra.run.vm06.stdout:Deploying unmanaged mgr service...
2026-03-08T22:57:06.347 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout Scheduled mgr update...
2026-03-08T22:57:06.871 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:06 vm06 bash[20625]: audit 2026-03-08T22:57:05.603234+0000 mon.a (mon.0) 61 : audit [INF] from='mgr.14118 192.168.123.106:0/3226879139' entity='mgr.y'
2026-03-08T22:57:06.871 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:06 vm06 bash[20625]: audit 2026-03-08T22:57:05.603234+0000 mon.a (mon.0) 61 : audit [INF] from='mgr.14118 192.168.123.106:0/3226879139' entity='mgr.y'
2026-03-08T22:57:06.871 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:06 vm06 bash[20625]: cephadm 2026-03-08T22:57:05.603601+0000 mgr.y (mgr.14118) 16 : cephadm [INF] Added host vm06
2026-03-08T22:57:06.871 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:06 vm06 bash[20625]: cephadm 2026-03-08T22:57:05.603601+0000 mgr.y (mgr.14118) 16 : cephadm [INF] Added host vm06
2026-03-08T22:57:06.871 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:06 vm06 bash[20625]: audit 2026-03-08T22:57:05.604213+0000 mon.a (mon.0) 62 : audit [DBG] from='mgr.14118 192.168.123.106:0/3226879139' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T22:57:06.871 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:06 vm06 bash[20625]: audit 2026-03-08T22:57:05.604213+0000 mon.a (mon.0) 62 : audit [DBG] from='mgr.14118 192.168.123.106:0/3226879139' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T22:57:06.871 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:06 vm06 bash[20625]: audit 2026-03-08T22:57:06.006170+0000 mon.a (mon.0) 63 : audit [INF] from='mgr.14118 192.168.123.106:0/3226879139' entity='mgr.y'
2026-03-08T22:57:06.871 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:06 vm06 bash[20625]: audit 2026-03-08T22:57:06.006170+0000 mon.a (mon.0) 63 : audit [INF] from='mgr.14118 192.168.123.106:0/3226879139' entity='mgr.y'
2026-03-08T22:57:06.871 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:06 vm06 bash[20625]: audit 2026-03-08T22:57:06.305201+0000 mon.a (mon.0) 64 : audit [INF] from='mgr.14118 192.168.123.106:0/3226879139' entity='mgr.y'
2026-03-08T22:57:06.871 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:06 vm06 bash[20625]: audit 2026-03-08T22:57:06.305201+0000 mon.a (mon.0) 64 : audit [INF] from='mgr.14118 192.168.123.106:0/3226879139' entity='mgr.y'
2026-03-08T22:57:06.871 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:06 vm06 bash[20625]: audit 2026-03-08T22:57:06.571044+0000 mon.a (mon.0) 65 : audit [INF] from='client.? 192.168.123.106:0/3322939435' entity='client.admin'
2026-03-08T22:57:06.871 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:06 vm06 bash[20625]: audit 2026-03-08T22:57:06.571044+0000 mon.a (mon.0) 65 : audit [INF] from='client.? 192.168.123.106:0/3322939435' entity='client.admin'
2026-03-08T22:57:06.904 INFO:teuthology.orchestra.run.vm06.stdout:Enabling the dashboard module...
2026-03-08T22:57:08.221 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:07 vm06 bash[20625]: audit 2026-03-08T22:57:06.002053+0000 mgr.y (mgr.14118) 17 : audit [DBG] from='client.14140 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch
2026-03-08T22:57:08.221 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:07 vm06 bash[20625]: audit 2026-03-08T22:57:06.002053+0000 mgr.y (mgr.14118) 17 : audit [DBG] from='client.14140 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch
2026-03-08T22:57:08.221 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:07 vm06 bash[20625]: cephadm 2026-03-08T22:57:06.002965+0000 mgr.y (mgr.14118) 18 : cephadm [INF] Saving service mon spec with placement count:5
2026-03-08T22:57:08.221 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:07 vm06 bash[20625]: cephadm 2026-03-08T22:57:06.002965+0000 mgr.y (mgr.14118) 18 : cephadm [INF] Saving service mon spec with placement count:5
2026-03-08T22:57:08.221 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:07 vm06 bash[20625]: audit 2026-03-08T22:57:06.300791+0000 mgr.y (mgr.14118) 19 : audit [DBG] from='client.14142 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch
2026-03-08T22:57:08.221 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:07 vm06 bash[20625]: audit 2026-03-08T22:57:06.300791+0000 mgr.y (mgr.14118) 19 : audit [DBG] from='client.14142 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch
2026-03-08T22:57:08.221 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:07 vm06 bash[20625]: cephadm 2026-03-08T22:57:06.301754+0000 mgr.y (mgr.14118) 20 : cephadm [INF] Saving service mgr spec with placement count:2
2026-03-08T22:57:08.221 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:07 vm06 bash[20625]: cephadm 2026-03-08T22:57:06.301754+0000 mgr.y (mgr.14118) 20 : cephadm [INF] Saving service mgr spec with placement count:2
2026-03-08T22:57:08.221 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:07 vm06 bash[20625]: audit 2026-03-08T22:57:06.854617+0000 mon.a (mon.0) 66 : audit [INF] from='client.? 192.168.123.106:0/3431215985' entity='client.admin'
2026-03-08T22:57:08.221 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:07 vm06 bash[20625]: audit 2026-03-08T22:57:06.854617+0000 mon.a (mon.0) 66 : audit [INF] from='client.? 192.168.123.106:0/3431215985' entity='client.admin'
2026-03-08T22:57:08.221 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:07 vm06 bash[20625]: audit 2026-03-08T22:57:07.199362+0000 mon.a (mon.0) 67 : audit [INF] from='mgr.14118 192.168.123.106:0/3226879139' entity='mgr.y'
2026-03-08T22:57:08.221 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:07 vm06 bash[20625]: audit 2026-03-08T22:57:07.199362+0000 mon.a (mon.0) 67 : audit [INF] from='mgr.14118 192.168.123.106:0/3226879139' entity='mgr.y'
2026-03-08T22:57:08.221 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:07 vm06 bash[20625]: audit 2026-03-08T22:57:07.236749+0000 mon.a (mon.0) 68 : audit [INF] from='client.? 192.168.123.106:0/1044126831' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
2026-03-08T22:57:08.221 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:07 vm06 bash[20625]: audit 2026-03-08T22:57:07.236749+0000 mon.a (mon.0) 68 : audit [INF] from='client.? 192.168.123.106:0/1044126831' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
2026-03-08T22:57:08.221 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:07 vm06 bash[20625]: audit 2026-03-08T22:57:07.490052+0000 mon.a (mon.0) 69 : audit [INF] from='mgr.14118 192.168.123.106:0/3226879139' entity='mgr.y'
2026-03-08T22:57:08.221 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:07 vm06 bash[20625]: audit 2026-03-08T22:57:07.490052+0000 mon.a (mon.0) 69 : audit [INF] from='mgr.14118 192.168.123.106:0/3226879139' entity='mgr.y'
2026-03-08T22:57:08.530 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:57:08 vm06 bash[20883]: ignoring --setuser ceph since I am not root
2026-03-08T22:57:08.530 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:57:08 vm06 bash[20883]: ignoring --setgroup ceph since I am not root
2026-03-08T22:57:08.530 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:57:08 vm06 bash[20883]: debug 2026-03-08T22:57:08.345+0000 7fdaa0c93140 -1
mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-08T22:57:08.530 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:57:08 vm06 bash[20883]: debug 2026-03-08T22:57:08.389+0000 7fdaa0c93140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-08T22:57:08.730 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout { 2026-03-08T22:57:08.730 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "epoch": 9, 2026-03-08T22:57:08.730 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "available": true, 2026-03-08T22:57:08.730 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "active_name": "y", 2026-03-08T22:57:08.730 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "num_standby": 0 2026-03-08T22:57:08.730 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout } 2026-03-08T22:57:08.730 INFO:teuthology.orchestra.run.vm06.stdout:Waiting for the mgr to restart... 2026-03-08T22:57:08.730 INFO:teuthology.orchestra.run.vm06.stdout:Waiting for mgr epoch 9... 2026-03-08T22:57:08.826 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:57:08 vm06 bash[20883]: debug 2026-03-08T22:57:08.533+0000 7fdaa0c93140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-08T22:57:09.209 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:57:08 vm06 bash[20883]: debug 2026-03-08T22:57:08.849+0000 7fdaa0c93140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-08T22:57:09.511 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:09 vm06 bash[20625]: audit 2026-03-08T22:57:08.202109+0000 mon.a (mon.0) 70 : audit [INF] from='client.? 192.168.123.106:0/1044126831' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished 2026-03-08T22:57:09.511 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:09 vm06 bash[20625]: audit 2026-03-08T22:57:08.202109+0000 mon.a (mon.0) 70 : audit [INF] from='client.? 
192.168.123.106:0/1044126831' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished 2026-03-08T22:57:09.511 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:09 vm06 bash[20625]: cluster 2026-03-08T22:57:08.210244+0000 mon.a (mon.0) 71 : cluster [DBG] mgrmap e9: y(active, since 7s) 2026-03-08T22:57:09.511 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:09 vm06 bash[20625]: cluster 2026-03-08T22:57:08.210244+0000 mon.a (mon.0) 71 : cluster [DBG] mgrmap e9: y(active, since 7s) 2026-03-08T22:57:09.511 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:09 vm06 bash[20625]: audit 2026-03-08T22:57:08.640780+0000 mon.a (mon.0) 72 : audit [DBG] from='client.? 192.168.123.106:0/723172848' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-08T22:57:09.511 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:09 vm06 bash[20625]: audit 2026-03-08T22:57:08.640780+0000 mon.a (mon.0) 72 : audit [DBG] from='client.? 192.168.123.106:0/723172848' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-08T22:57:09.511 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:57:09 vm06 bash[20883]: debug 2026-03-08T22:57:09.309+0000 7fdaa0c93140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-08T22:57:09.511 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:57:09 vm06 bash[20883]: debug 2026-03-08T22:57:09.389+0000 7fdaa0c93140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-08T22:57:09.780 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:57:09 vm06 bash[20883]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 
2026-03-08T22:57:09.780 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:57:09 vm06 bash[20883]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 2026-03-08T22:57:09.780 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:57:09 vm06 bash[20883]: from numpy import show_config as show_numpy_config 2026-03-08T22:57:09.780 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:57:09 vm06 bash[20883]: debug 2026-03-08T22:57:09.509+0000 7fdaa0c93140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-08T22:57:09.780 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:57:09 vm06 bash[20883]: debug 2026-03-08T22:57:09.637+0000 7fdaa0c93140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-08T22:57:09.780 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:57:09 vm06 bash[20883]: debug 2026-03-08T22:57:09.673+0000 7fdaa0c93140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-08T22:57:09.780 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:57:09 vm06 bash[20883]: debug 2026-03-08T22:57:09.709+0000 7fdaa0c93140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-08T22:57:09.780 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:57:09 vm06 bash[20883]: debug 2026-03-08T22:57:09.753+0000 7fdaa0c93140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-08T22:57:10.247 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:57:09 vm06 bash[20883]: debug 2026-03-08T22:57:09.801+0000 7fdaa0c93140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-08T22:57:10.500 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:57:10 vm06 bash[20883]: debug 2026-03-08T22:57:10.241+0000 7fdaa0c93140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-08T22:57:10.500 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:57:10 vm06 bash[20883]: debug 2026-03-08T22:57:10.277+0000 7fdaa0c93140 -1 mgr[py] Module telegraf 
has missing NOTIFY_TYPES member 2026-03-08T22:57:10.500 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:57:10 vm06 bash[20883]: debug 2026-03-08T22:57:10.317+0000 7fdaa0c93140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-08T22:57:10.500 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:57:10 vm06 bash[20883]: debug 2026-03-08T22:57:10.457+0000 7fdaa0c93140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-08T22:57:10.780 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:57:10 vm06 bash[20883]: debug 2026-03-08T22:57:10.493+0000 7fdaa0c93140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-08T22:57:10.780 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:57:10 vm06 bash[20883]: debug 2026-03-08T22:57:10.533+0000 7fdaa0c93140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-08T22:57:10.780 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:57:10 vm06 bash[20883]: debug 2026-03-08T22:57:10.645+0000 7fdaa0c93140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-08T22:57:11.066 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:57:10 vm06 bash[20883]: debug 2026-03-08T22:57:10.805+0000 7fdaa0c93140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-08T22:57:11.066 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:57:10 vm06 bash[20883]: debug 2026-03-08T22:57:10.981+0000 7fdaa0c93140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-08T22:57:11.066 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:57:11 vm06 bash[20883]: debug 2026-03-08T22:57:11.017+0000 7fdaa0c93140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-08T22:57:11.437 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:57:11 vm06 bash[20883]: debug 2026-03-08T22:57:11.061+0000 7fdaa0c93140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-08T22:57:11.437 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:57:11 vm06 bash[20883]: debug 2026-03-08T22:57:11.209+0000 
7fdaa0c93140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-08T22:57:11.437 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:57:11 vm06 bash[20883]: debug 2026-03-08T22:57:11.429+0000 7fdaa0c93140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-08T22:57:11.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:11 vm06 bash[20625]: cluster 2026-03-08T22:57:11.437487+0000 mon.a (mon.0) 73 : cluster [INF] Active manager daemon y restarted 2026-03-08T22:57:11.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:11 vm06 bash[20625]: cluster 2026-03-08T22:57:11.437487+0000 mon.a (mon.0) 73 : cluster [INF] Active manager daemon y restarted 2026-03-08T22:57:11.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:11 vm06 bash[20625]: cluster 2026-03-08T22:57:11.437917+0000 mon.a (mon.0) 74 : cluster [INF] Activating manager daemon y 2026-03-08T22:57:11.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:11 vm06 bash[20625]: cluster 2026-03-08T22:57:11.437917+0000 mon.a (mon.0) 74 : cluster [INF] Activating manager daemon y 2026-03-08T22:57:11.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:11 vm06 bash[20625]: cluster 2026-03-08T22:57:11.443656+0000 mon.a (mon.0) 75 : cluster [DBG] osdmap e3: 0 total, 0 up, 0 in 2026-03-08T22:57:11.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:11 vm06 bash[20625]: cluster 2026-03-08T22:57:11.443656+0000 mon.a (mon.0) 75 : cluster [DBG] osdmap e3: 0 total, 0 up, 0 in 2026-03-08T22:57:11.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:11 vm06 bash[20625]: cluster 2026-03-08T22:57:11.443831+0000 mon.a (mon.0) 76 : cluster [DBG] mgrmap e10: y(active, starting, since 0.00602005s) 2026-03-08T22:57:11.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:11 vm06 bash[20625]: cluster 2026-03-08T22:57:11.443831+0000 mon.a (mon.0) 76 : cluster [DBG] mgrmap e10: y(active, starting, since 0.00602005s) 2026-03-08T22:57:11.780 
INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:11 vm06 bash[20625]: audit 2026-03-08T22:57:11.446814+0000 mon.a (mon.0) 77 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-08T22:57:11.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:11 vm06 bash[20625]: audit 2026-03-08T22:57:11.446814+0000 mon.a (mon.0) 77 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-08T22:57:11.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:11 vm06 bash[20625]: audit 2026-03-08T22:57:11.447648+0000 mon.a (mon.0) 78 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-08T22:57:11.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:11 vm06 bash[20625]: audit 2026-03-08T22:57:11.447648+0000 mon.a (mon.0) 78 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-08T22:57:11.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:11 vm06 bash[20625]: audit 2026-03-08T22:57:11.448668+0000 mon.a (mon.0) 79 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-08T22:57:11.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:11 vm06 bash[20625]: audit 2026-03-08T22:57:11.448668+0000 mon.a (mon.0) 79 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-08T22:57:11.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:11 vm06 bash[20625]: audit 2026-03-08T22:57:11.448981+0000 mon.a (mon.0) 80 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-08T22:57:11.780 
INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:11 vm06 bash[20625]: audit 2026-03-08T22:57:11.448981+0000 mon.a (mon.0) 80 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-08T22:57:11.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:11 vm06 bash[20625]: audit 2026-03-08T22:57:11.449300+0000 mon.a (mon.0) 81 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-08T22:57:11.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:11 vm06 bash[20625]: audit 2026-03-08T22:57:11.449300+0000 mon.a (mon.0) 81 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-08T22:57:11.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:11 vm06 bash[20625]: cluster 2026-03-08T22:57:11.455585+0000 mon.a (mon.0) 82 : cluster [INF] Manager daemon y is now available 2026-03-08T22:57:11.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:11 vm06 bash[20625]: cluster 2026-03-08T22:57:11.455585+0000 mon.a (mon.0) 82 : cluster [INF] Manager daemon y is now available 2026-03-08T22:57:11.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:11 vm06 bash[20625]: audit 2026-03-08T22:57:11.478002+0000 mon.a (mon.0) 83 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-08T22:57:11.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:11 vm06 bash[20625]: audit 2026-03-08T22:57:11.478002+0000 mon.a (mon.0) 83 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-08T22:57:11.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:11 vm06 bash[20625]: audit 2026-03-08T22:57:11.478508+0000 mon.a 
(mon.0) 84 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T22:57:11.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:11 vm06 bash[20625]: audit 2026-03-08T22:57:11.478508+0000 mon.a (mon.0) 84 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T22:57:12.488 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout { 2026-03-08T22:57:12.488 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "mgrmap_epoch": 11, 2026-03-08T22:57:12.488 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout "initialized": true 2026-03-08T22:57:12.488 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout } 2026-03-08T22:57:12.488 INFO:teuthology.orchestra.run.vm06.stdout:mgr epoch 9 is available 2026-03-08T22:57:12.488 INFO:teuthology.orchestra.run.vm06.stdout:Generating a dashboard self-signed certificate... 
2026-03-08T22:57:12.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:12 vm06 bash[20625]: audit 2026-03-08T22:57:11.496476+0000 mon.a (mon.0) 85 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-08T22:57:12.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:12 vm06 bash[20625]: audit 2026-03-08T22:57:11.496476+0000 mon.a (mon.0) 85 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-08T22:57:12.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:12 vm06 bash[20625]: cluster 2026-03-08T22:57:12.447282+0000 mon.a (mon.0) 86 : cluster [DBG] mgrmap e11: y(active, since 1.00948s) 2026-03-08T22:57:12.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:12 vm06 bash[20625]: cluster 2026-03-08T22:57:12.447282+0000 mon.a (mon.0) 86 : cluster [DBG] mgrmap e11: y(active, since 1.00948s) 2026-03-08T22:57:13.023 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout Self-signed certificate created 2026-03-08T22:57:13.023 INFO:teuthology.orchestra.run.vm06.stdout:Creating initial admin user... 2026-03-08T22:57:13.509 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout {"username": "admin", "password": "$2b$12$ddxpSJwCiFSjQUcJ9EOTU.C7v/ayZko1Psgs4rOTfoAZSowxt3dym", "roles": ["administrator"], "name": null, "email": null, "lastUpdate": 1773010633, "enabled": true, "pwdExpirationDate": null, "pwdUpdateRequired": true} 2026-03-08T22:57:13.509 INFO:teuthology.orchestra.run.vm06.stdout:Fetching dashboard port number... 
2026-03-08T22:57:13.822 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stdout 8443 2026-03-08T22:57:13.822 INFO:teuthology.orchestra.run.vm06.stdout:firewalld does not appear to be present 2026-03-08T22:57:13.822 INFO:teuthology.orchestra.run.vm06.stdout:Not possible to open ports <[8443]>. firewalld.service is not available 2026-03-08T22:57:13.823 INFO:teuthology.orchestra.run.vm06.stdout:Ceph Dashboard is now available at: 2026-03-08T22:57:13.823 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-08T22:57:13.823 INFO:teuthology.orchestra.run.vm06.stdout: URL: https://vm06.local:8443/ 2026-03-08T22:57:13.823 INFO:teuthology.orchestra.run.vm06.stdout: User: admin 2026-03-08T22:57:13.823 INFO:teuthology.orchestra.run.vm06.stdout: Password: 2vwuzer5di 2026-03-08T22:57:13.823 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-08T22:57:13.823 INFO:teuthology.orchestra.run.vm06.stdout:Saving cluster configuration to /var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/config directory 2026-03-08T22:57:14.113 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:13 vm06 bash[20625]: cephadm 2026-03-08T22:57:12.344216+0000 mgr.y (mgr.14150) 1 : cephadm [INF] [08/Mar/2026:22:57:12] ENGINE Bus STARTING 2026-03-08T22:57:14.113 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:13 vm06 bash[20625]: cephadm 2026-03-08T22:57:12.344216+0000 mgr.y (mgr.14150) 1 : cephadm [INF] [08/Mar/2026:22:57:12] ENGINE Bus STARTING 2026-03-08T22:57:14.113 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:13 vm06 bash[20625]: audit 2026-03-08T22:57:12.448025+0000 mgr.y (mgr.14150) 2 : audit [DBG] from='client.14154 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch 2026-03-08T22:57:14.113 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:13 vm06 bash[20625]: audit 2026-03-08T22:57:12.448025+0000 mgr.y (mgr.14150) 2 : audit [DBG] from='client.14154 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch 2026-03-08T22:57:14.113 
INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:13 vm06 bash[20625]: audit 2026-03-08T22:57:12.451760+0000 mgr.y (mgr.14150) 3 : audit [DBG] from='client.14154 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch 2026-03-08T22:57:14.113 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:13 vm06 bash[20625]: audit 2026-03-08T22:57:12.451760+0000 mgr.y (mgr.14150) 3 : audit [DBG] from='client.14154 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch 2026-03-08T22:57:14.113 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:13 vm06 bash[20625]: cephadm 2026-03-08T22:57:12.452128+0000 mgr.y (mgr.14150) 4 : cephadm [INF] [08/Mar/2026:22:57:12] ENGINE Serving on https://192.168.123.106:7150 2026-03-08T22:57:14.113 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:13 vm06 bash[20625]: cephadm 2026-03-08T22:57:12.452128+0000 mgr.y (mgr.14150) 4 : cephadm [INF] [08/Mar/2026:22:57:12] ENGINE Serving on https://192.168.123.106:7150 2026-03-08T22:57:14.113 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:13 vm06 bash[20625]: cephadm 2026-03-08T22:57:12.457566+0000 mgr.y (mgr.14150) 5 : cephadm [INF] [08/Mar/2026:22:57:12] ENGINE Client ('192.168.123.106', 49382) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-08T22:57:14.113 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:13 vm06 bash[20625]: cephadm 2026-03-08T22:57:12.457566+0000 mgr.y (mgr.14150) 5 : cephadm [INF] [08/Mar/2026:22:57:12] ENGINE Client ('192.168.123.106', 49382) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-08T22:57:14.113 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:13 vm06 bash[20625]: cephadm 2026-03-08T22:57:12.553258+0000 mgr.y (mgr.14150) 6 : cephadm [INF] [08/Mar/2026:22:57:12] ENGINE Serving on http://192.168.123.106:8765 2026-03-08T22:57:14.113 
INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:13 vm06 bash[20625]: cephadm 2026-03-08T22:57:12.553258+0000 mgr.y (mgr.14150) 6 : cephadm [INF] [08/Mar/2026:22:57:12] ENGINE Serving on http://192.168.123.106:8765 2026-03-08T22:57:14.113 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:13 vm06 bash[20625]: cephadm 2026-03-08T22:57:12.553440+0000 mgr.y (mgr.14150) 7 : cephadm [INF] [08/Mar/2026:22:57:12] ENGINE Bus STARTED 2026-03-08T22:57:14.113 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:13 vm06 bash[20625]: cephadm 2026-03-08T22:57:12.553440+0000 mgr.y (mgr.14150) 7 : cephadm [INF] [08/Mar/2026:22:57:12] ENGINE Bus STARTED 2026-03-08T22:57:14.113 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:13 vm06 bash[20625]: audit 2026-03-08T22:57:12.720940+0000 mgr.y (mgr.14150) 8 : audit [DBG] from='client.14162 -' entity='client.admin' cmd=[{"prefix": "dashboard create-self-signed-cert", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T22:57:14.113 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:13 vm06 bash[20625]: audit 2026-03-08T22:57:12.720940+0000 mgr.y (mgr.14150) 8 : audit [DBG] from='client.14162 -' entity='client.admin' cmd=[{"prefix": "dashboard create-self-signed-cert", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T22:57:14.113 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:13 vm06 bash[20625]: audit 2026-03-08T22:57:12.952198+0000 mon.a (mon.0) 87 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:14.113 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:13 vm06 bash[20625]: audit 2026-03-08T22:57:12.952198+0000 mon.a (mon.0) 87 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:14.113 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:13 vm06 bash[20625]: audit 2026-03-08T22:57:12.959793+0000 mon.a (mon.0) 88 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:14.113 
INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:13 vm06 bash[20625]: audit 2026-03-08T22:57:12.959793+0000 mon.a (mon.0) 88 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:14.113 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:13 vm06 bash[20625]: audit 2026-03-08T22:57:13.446761+0000 mon.a (mon.0) 89 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:14.113 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:13 vm06 bash[20625]: audit 2026-03-08T22:57:13.446761+0000 mon.a (mon.0) 89 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:14.113 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:13 vm06 bash[20625]: audit 2026-03-08T22:57:13.781267+0000 mon.a (mon.0) 90 : audit [DBG] from='client.? 192.168.123.106:0/3801189881' entity='client.admin' cmd=[{"prefix": "config get", "who": "mgr", "key": "mgr/dashboard/ssl_server_port"}]: dispatch 2026-03-08T22:57:14.113 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:13 vm06 bash[20625]: audit 2026-03-08T22:57:13.781267+0000 mon.a (mon.0) 90 : audit [DBG] from='client.? 
192.168.123.106:0/3801189881' entity='client.admin' cmd=[{"prefix": "config get", "who": "mgr", "key": "mgr/dashboard/ssl_server_port"}]: dispatch 2026-03-08T22:57:14.139 INFO:teuthology.orchestra.run.vm06.stdout:/usr/bin/ceph: stderr set mgr/dashboard/cluster/status 2026-03-08T22:57:14.139 INFO:teuthology.orchestra.run.vm06.stdout:You can access the Ceph CLI as following in case of multi-cluster or non-default config: 2026-03-08T22:57:14.139 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-08T22:57:14.139 INFO:teuthology.orchestra.run.vm06.stdout: sudo /home/ubuntu/cephtest/cephadm shell --fsid e2eb96e6-1b41-11f1-83e5-75f1b5373d30 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring 2026-03-08T22:57:14.139 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-08T22:57:14.139 INFO:teuthology.orchestra.run.vm06.stdout:Or, if you are only running a single cluster on this host: 2026-03-08T22:57:14.139 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-08T22:57:14.139 INFO:teuthology.orchestra.run.vm06.stdout: sudo /home/ubuntu/cephtest/cephadm shell 2026-03-08T22:57:14.139 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-08T22:57:14.139 INFO:teuthology.orchestra.run.vm06.stdout:Please consider enabling telemetry to help improve Ceph: 2026-03-08T22:57:14.140 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-08T22:57:14.140 INFO:teuthology.orchestra.run.vm06.stdout: ceph telemetry on 2026-03-08T22:57:14.140 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-08T22:57:14.140 INFO:teuthology.orchestra.run.vm06.stdout:For more information see: 2026-03-08T22:57:14.140 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-08T22:57:14.140 INFO:teuthology.orchestra.run.vm06.stdout: https://docs.ceph.com/en/latest/mgr/telemetry/ 2026-03-08T22:57:14.140 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-08T22:57:14.140 INFO:teuthology.orchestra.run.vm06.stdout:Bootstrap complete. 2026-03-08T22:57:14.159 INFO:tasks.cephadm:Fetching config... 
2026-03-08T22:57:14.159 DEBUG:teuthology.orchestra.run.vm06:> set -ex 2026-03-08T22:57:14.159 DEBUG:teuthology.orchestra.run.vm06:> dd if=/etc/ceph/ceph.conf of=/dev/stdout 2026-03-08T22:57:14.162 INFO:tasks.cephadm:Fetching client.admin keyring... 2026-03-08T22:57:14.162 DEBUG:teuthology.orchestra.run.vm06:> set -ex 2026-03-08T22:57:14.162 DEBUG:teuthology.orchestra.run.vm06:> dd if=/etc/ceph/ceph.client.admin.keyring of=/dev/stdout 2026-03-08T22:57:14.207 INFO:tasks.cephadm:Fetching mon keyring... 2026-03-08T22:57:14.208 DEBUG:teuthology.orchestra.run.vm06:> set -ex 2026-03-08T22:57:14.208 DEBUG:teuthology.orchestra.run.vm06:> sudo dd if=/var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/mon.a/keyring of=/dev/stdout 2026-03-08T22:57:14.257 INFO:tasks.cephadm:Fetching pub ssh key... 2026-03-08T22:57:14.257 DEBUG:teuthology.orchestra.run.vm06:> set -ex 2026-03-08T22:57:14.257 DEBUG:teuthology.orchestra.run.vm06:> dd if=/home/ubuntu/cephtest/ceph.pub of=/dev/stdout 2026-03-08T22:57:14.303 INFO:tasks.cephadm:Installing pub ssh key for root users... 
2026-03-08T22:57:14.303 DEBUG:teuthology.orchestra.run.vm06:> sudo install -d -m 0700 /root/.ssh && echo 'ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF6UbXXd8VrxvPrA3qKrmPS87uQum50pZikbr4Z7mWmF ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30' | sudo tee -a /root/.ssh/authorized_keys && sudo chmod 0600 /root/.ssh/authorized_keys 2026-03-08T22:57:14.356 INFO:teuthology.orchestra.run.vm06.stdout:ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF6UbXXd8VrxvPrA3qKrmPS87uQum50pZikbr4Z7mWmF ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30 2026-03-08T22:57:14.360 DEBUG:teuthology.orchestra.run.vm11:> sudo install -d -m 0700 /root/.ssh && echo 'ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF6UbXXd8VrxvPrA3qKrmPS87uQum50pZikbr4Z7mWmF ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30' | sudo tee -a /root/.ssh/authorized_keys && sudo chmod 0600 /root/.ssh/authorized_keys 2026-03-08T22:57:14.372 INFO:teuthology.orchestra.run.vm11.stdout:ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF6UbXXd8VrxvPrA3qKrmPS87uQum50pZikbr4Z7mWmF ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30 2026-03-08T22:57:14.377 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e2eb96e6-1b41-11f1-83e5-75f1b5373d30 -- ceph config set mgr mgr/cephadm/allow_ptrace true 2026-03-08T22:57:15.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:14 vm06 bash[20625]: audit 2026-03-08T22:57:13.291322+0000 mgr.y (mgr.14150) 9 : audit [DBG] from='client.14164 -' entity='client.admin' cmd=[{"prefix": "dashboard ac-user-create", "username": "admin", "rolename": "administrator", "force_password": true, "pwd_update_required": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T22:57:15.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:14 vm06 bash[20625]: audit 2026-03-08T22:57:13.291322+0000 mgr.y (mgr.14150) 9 : audit [DBG] from='client.14164 -' entity='client.admin' cmd=[{"prefix": "dashboard 
ac-user-create", "username": "admin", "rolename": "administrator", "force_password": true, "pwd_update_required": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T22:57:15.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:14 vm06 bash[20625]: cluster 2026-03-08T22:57:13.964023+0000 mon.a (mon.0) 91 : cluster [DBG] mgrmap e12: y(active, since 2s) 2026-03-08T22:57:15.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:14 vm06 bash[20625]: cluster 2026-03-08T22:57:13.964023+0000 mon.a (mon.0) 91 : cluster [DBG] mgrmap e12: y(active, since 2s) 2026-03-08T22:57:15.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:14 vm06 bash[20625]: audit 2026-03-08T22:57:14.100744+0000 mon.a (mon.0) 92 : audit [INF] from='client.? 192.168.123.106:0/1251462135' entity='client.admin' 2026-03-08T22:57:15.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:14 vm06 bash[20625]: audit 2026-03-08T22:57:14.100744+0000 mon.a (mon.0) 92 : audit [INF] from='client.? 192.168.123.106:0/1251462135' entity='client.admin' 2026-03-08T22:57:17.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:17 vm06 bash[20625]: audit 2026-03-08T22:57:16.398549+0000 mon.a (mon.0) 93 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:17.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:17 vm06 bash[20625]: audit 2026-03-08T22:57:16.398549+0000 mon.a (mon.0) 93 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:17.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:17 vm06 bash[20625]: audit 2026-03-08T22:57:16.967602+0000 mon.a (mon.0) 94 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:17.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:17 vm06 bash[20625]: audit 2026-03-08T22:57:16.967602+0000 mon.a (mon.0) 94 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:18.275 
INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/mon.a/config 2026-03-08T22:57:18.622 INFO:tasks.cephadm:Distributing conf and client.admin keyring to all hosts + 0755 2026-03-08T22:57:18.622 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e2eb96e6-1b41-11f1-83e5-75f1b5373d30 -- ceph orch client-keyring set client.admin '*' --mode 0755 2026-03-08T22:57:19.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:19 vm06 bash[20625]: cluster 2026-03-08T22:57:18.403904+0000 mon.a (mon.0) 95 : cluster [DBG] mgrmap e13: y(active, since 6s) 2026-03-08T22:57:19.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:19 vm06 bash[20625]: cluster 2026-03-08T22:57:18.403904+0000 mon.a (mon.0) 95 : cluster [DBG] mgrmap e13: y(active, since 6s) 2026-03-08T22:57:19.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:19 vm06 bash[20625]: audit 2026-03-08T22:57:18.557071+0000 mon.a (mon.0) 96 : audit [INF] from='client.? 192.168.123.106:0/83386232' entity='client.admin' 2026-03-08T22:57:19.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:19 vm06 bash[20625]: audit 2026-03-08T22:57:18.557071+0000 mon.a (mon.0) 96 : audit [INF] from='client.? 
192.168.123.106:0/83386232' entity='client.admin' 2026-03-08T22:57:23.192 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/mon.a/config 2026-03-08T22:57:23.514 INFO:tasks.cephadm:Writing (initial) conf and keyring to vm11 2026-03-08T22:57:23.514 DEBUG:teuthology.orchestra.run.vm11:> set -ex 2026-03-08T22:57:23.514 DEBUG:teuthology.orchestra.run.vm11:> dd of=/etc/ceph/ceph.conf 2026-03-08T22:57:23.518 DEBUG:teuthology.orchestra.run.vm11:> set -ex 2026-03-08T22:57:23.518 DEBUG:teuthology.orchestra.run.vm11:> dd of=/etc/ceph/ceph.client.admin.keyring 2026-03-08T22:57:23.563 INFO:tasks.cephadm:Adding host vm11 to orchestrator... 2026-03-08T22:57:23.563 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e2eb96e6-1b41-11f1-83e5-75f1b5373d30 -- ceph orch host add vm11 2026-03-08T22:57:23.727 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:23 vm06 bash[20625]: audit 2026-03-08T22:57:22.723498+0000 mon.a (mon.0) 97 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:23.727 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:23 vm06 bash[20625]: audit 2026-03-08T22:57:22.723498+0000 mon.a (mon.0) 97 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:23.727 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:23 vm06 bash[20625]: audit 2026-03-08T22:57:22.730011+0000 mon.a (mon.0) 98 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:23.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:23 vm06 bash[20625]: audit 2026-03-08T22:57:22.730011+0000 mon.a (mon.0) 98 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:23.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:23 vm06 
bash[20625]: audit 2026-03-08T22:57:22.730695+0000 mon.a (mon.0) 99 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm06", "name": "osd_memory_target"}]: dispatch 2026-03-08T22:57:23.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:23 vm06 bash[20625]: audit 2026-03-08T22:57:22.730695+0000 mon.a (mon.0) 99 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm06", "name": "osd_memory_target"}]: dispatch 2026-03-08T22:57:23.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:23 vm06 bash[20625]: audit 2026-03-08T22:57:22.737414+0000 mon.a (mon.0) 100 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:23.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:23 vm06 bash[20625]: audit 2026-03-08T22:57:22.737414+0000 mon.a (mon.0) 100 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:23.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:23 vm06 bash[20625]: audit 2026-03-08T22:57:22.743131+0000 mon.a (mon.0) 101 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T22:57:23.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:23 vm06 bash[20625]: audit 2026-03-08T22:57:22.743131+0000 mon.a (mon.0) 101 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T22:57:23.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:23 vm06 bash[20625]: audit 2026-03-08T22:57:22.752221+0000 mon.a (mon.0) 102 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:23.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:23 vm06 bash[20625]: audit 2026-03-08T22:57:22.752221+0000 mon.a (mon.0) 102 : audit [INF] from='mgr.14150 
192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:23.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:23 vm06 bash[20625]: audit 2026-03-08T22:57:23.431074+0000 mon.a (mon.0) 103 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:23.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:23 vm06 bash[20625]: audit 2026-03-08T22:57:23.431074+0000 mon.a (mon.0) 103 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:23.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:23 vm06 bash[20625]: audit 2026-03-08T22:57:23.431667+0000 mon.a (mon.0) 104 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T22:57:23.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:23 vm06 bash[20625]: audit 2026-03-08T22:57:23.431667+0000 mon.a (mon.0) 104 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T22:57:23.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:23 vm06 bash[20625]: audit 2026-03-08T22:57:23.432717+0000 mon.a (mon.0) 105 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T22:57:23.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:23 vm06 bash[20625]: audit 2026-03-08T22:57:23.432717+0000 mon.a (mon.0) 105 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T22:57:23.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:23 vm06 bash[20625]: audit 2026-03-08T22:57:23.433199+0000 mon.a (mon.0) 106 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T22:57:23.780 
INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:23 vm06 bash[20625]: audit 2026-03-08T22:57:23.433199+0000 mon.a (mon.0) 106 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T22:57:23.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:23 vm06 bash[20625]: audit 2026-03-08T22:57:23.577385+0000 mon.a (mon.0) 107 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:23.781 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:23 vm06 bash[20625]: audit 2026-03-08T22:57:23.577385+0000 mon.a (mon.0) 107 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:23.781 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:23 vm06 bash[20625]: audit 2026-03-08T22:57:23.580012+0000 mon.a (mon.0) 108 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:23.781 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:23 vm06 bash[20625]: audit 2026-03-08T22:57:23.580012+0000 mon.a (mon.0) 108 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:23.781 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:23 vm06 bash[20625]: audit 2026-03-08T22:57:23.582354+0000 mon.a (mon.0) 109 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:23.781 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:23 vm06 bash[20625]: audit 2026-03-08T22:57:23.582354+0000 mon.a (mon.0) 109 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:25.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:24 vm06 bash[20625]: audit 2026-03-08T22:57:23.428179+0000 mgr.y (mgr.14150) 10 : audit [DBG] from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "*", "mode": "0755", "target": ["mon-mgr", ""]}]: dispatch 
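The `ceph orch client-keyring set client.admin '*' --mode 0755` call above asks cephadm to maintain `/etc/ceph/ceph.client.admin.keyring` on every host (`'*'` placement) with mode 0755, which is what the subsequent `Updating vm06:/etc/ceph/ceph.client.admin.keyring` messages reflect. A local sketch of the distribute-with-mode step (paths and the keyring body are placeholders):

```shell
# Simulate placing a keyring at a destination with an explicit mode,
# then verify the mode stuck -- the same guarantee client-keyring
# management provides on each managed host.
src=$(mktemp)
dstdir=$(mktemp -d)
printf '[client.admin]\n\tkey = <placeholder>\n' > "$src"
install -m 0755 "$src" "$dstdir/ceph.client.admin.keyring"
stat -c '%a' "$dstdir/ceph.client.admin.keyring"
rm -rf "$src" "$dstdir"
```

Mode 0755 (world-readable) is deliberate in this test profile so that non-root test clients can read the admin keyring; production deployments would normally keep it 0600.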
2026-03-08T22:57:25.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:24 vm06 bash[20625]: audit 2026-03-08T22:57:23.428179+0000 mgr.y (mgr.14150) 10 : audit [DBG] from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "*", "mode": "0755", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T22:57:25.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:24 vm06 bash[20625]: cephadm 2026-03-08T22:57:23.433869+0000 mgr.y (mgr.14150) 11 : cephadm [INF] Updating vm06:/etc/ceph/ceph.conf 2026-03-08T22:57:25.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:24 vm06 bash[20625]: cephadm 2026-03-08T22:57:23.433869+0000 mgr.y (mgr.14150) 11 : cephadm [INF] Updating vm06:/etc/ceph/ceph.conf 2026-03-08T22:57:25.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:24 vm06 bash[20625]: cephadm 2026-03-08T22:57:23.471779+0000 mgr.y (mgr.14150) 12 : cephadm [INF] Updating vm06:/var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/config/ceph.conf 2026-03-08T22:57:25.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:24 vm06 bash[20625]: cephadm 2026-03-08T22:57:23.471779+0000 mgr.y (mgr.14150) 12 : cephadm [INF] Updating vm06:/var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/config/ceph.conf 2026-03-08T22:57:25.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:24 vm06 bash[20625]: cephadm 2026-03-08T22:57:23.517218+0000 mgr.y (mgr.14150) 13 : cephadm [INF] Updating vm06:/etc/ceph/ceph.client.admin.keyring 2026-03-08T22:57:25.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:24 vm06 bash[20625]: cephadm 2026-03-08T22:57:23.517218+0000 mgr.y (mgr.14150) 13 : cephadm [INF] Updating vm06:/etc/ceph/ceph.client.admin.keyring 2026-03-08T22:57:25.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:24 vm06 bash[20625]: cephadm 2026-03-08T22:57:23.546948+0000 mgr.y (mgr.14150) 14 : cephadm [INF] Updating 
vm06:/var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/config/ceph.client.admin.keyring 2026-03-08T22:57:25.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:24 vm06 bash[20625]: cephadm 2026-03-08T22:57:23.546948+0000 mgr.y (mgr.14150) 14 : cephadm [INF] Updating vm06:/var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/config/ceph.client.admin.keyring 2026-03-08T22:57:28.175 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/mon.a/config 2026-03-08T22:57:29.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:29 vm06 bash[20625]: audit 2026-03-08T22:57:28.454114+0000 mgr.y (mgr.14150) 15 : audit [DBG] from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm11", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T22:57:29.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:29 vm06 bash[20625]: audit 2026-03-08T22:57:28.454114+0000 mgr.y (mgr.14150) 15 : audit [DBG] from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm11", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T22:57:29.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:29 vm06 bash[20625]: cephadm 2026-03-08T22:57:28.961861+0000 mgr.y (mgr.14150) 16 : cephadm [INF] Deploying cephadm binary to vm11 2026-03-08T22:57:29.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:29 vm06 bash[20625]: cephadm 2026-03-08T22:57:28.961861+0000 mgr.y (mgr.14150) 16 : cephadm [INF] Deploying cephadm binary to vm11 2026-03-08T22:57:30.139 INFO:teuthology.orchestra.run.vm06.stdout:Added host 'vm11' with addr '192.168.123.111' 2026-03-08T22:57:30.200 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e2eb96e6-1b41-11f1-83e5-75f1b5373d30 -- ceph orch host ls --format=json 2026-03-08T22:57:31.530 
INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:31 vm06 bash[20625]: audit 2026-03-08T22:57:30.136582+0000 mon.a (mon.0) 110 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:31.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:31 vm06 bash[20625]: audit 2026-03-08T22:57:30.136582+0000 mon.a (mon.0) 110 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:31.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:31 vm06 bash[20625]: cephadm 2026-03-08T22:57:30.137000+0000 mgr.y (mgr.14150) 17 : cephadm [INF] Added host vm11 2026-03-08T22:57:31.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:31 vm06 bash[20625]: cephadm 2026-03-08T22:57:30.137000+0000 mgr.y (mgr.14150) 17 : cephadm [INF] Added host vm11 2026-03-08T22:57:31.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:31 vm06 bash[20625]: audit 2026-03-08T22:57:30.137260+0000 mon.a (mon.0) 111 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T22:57:31.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:31 vm06 bash[20625]: audit 2026-03-08T22:57:30.137260+0000 mon.a (mon.0) 111 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T22:57:31.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:31 vm06 bash[20625]: audit 2026-03-08T22:57:30.422715+0000 mon.a (mon.0) 112 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:31.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:31 vm06 bash[20625]: audit 2026-03-08T22:57:30.422715+0000 mon.a (mon.0) 112 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:33.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:32 vm06 bash[20625]: cluster 2026-03-08T22:57:31.449626+0000 mgr.y (mgr.14150) 
18 : cluster [DBG] pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T22:57:33.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:32 vm06 bash[20625]: cluster 2026-03-08T22:57:31.449626+0000 mgr.y (mgr.14150) 18 : cluster [DBG] pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T22:57:33.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:32 vm06 bash[20625]: audit 2026-03-08T22:57:31.697839+0000 mon.a (mon.0) 113 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:33.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:32 vm06 bash[20625]: audit 2026-03-08T22:57:31.697839+0000 mon.a (mon.0) 113 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:33.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:32 vm06 bash[20625]: audit 2026-03-08T22:57:32.235581+0000 mon.a (mon.0) 114 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:33.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:32 vm06 bash[20625]: audit 2026-03-08T22:57:32.235581+0000 mon.a (mon.0) 114 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:34.819 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/mon.a/config 2026-03-08T22:57:35.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:34 vm06 bash[20625]: cluster 2026-03-08T22:57:33.449784+0000 mgr.y (mgr.14150) 19 : cluster [DBG] pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T22:57:35.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:34 vm06 bash[20625]: cluster 2026-03-08T22:57:33.449784+0000 mgr.y (mgr.14150) 19 : cluster [DBG] pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T22:57:35.104 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-08T22:57:35.104 INFO:teuthology.orchestra.run.vm06.stdout:[{"addr": "192.168.123.106", "hostname": 
"vm06", "labels": [], "status": ""}, {"addr": "192.168.123.111", "hostname": "vm11", "labels": [], "status": ""}] 2026-03-08T22:57:35.156 INFO:tasks.cephadm:Setting crush tunables to default 2026-03-08T22:57:35.156 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e2eb96e6-1b41-11f1-83e5-75f1b5373d30 -- ceph osd crush tunables default 2026-03-08T22:57:36.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:35 vm06 bash[20625]: audit 2026-03-08T22:57:34.926113+0000 mon.a (mon.0) 115 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:36.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:35 vm06 bash[20625]: audit 2026-03-08T22:57:34.926113+0000 mon.a (mon.0) 115 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:36.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:35 vm06 bash[20625]: audit 2026-03-08T22:57:34.928548+0000 mon.a (mon.0) 116 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:36.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:35 vm06 bash[20625]: audit 2026-03-08T22:57:34.928548+0000 mon.a (mon.0) 116 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:36.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:35 vm06 bash[20625]: audit 2026-03-08T22:57:34.931416+0000 mon.a (mon.0) 117 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:36.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:35 vm06 bash[20625]: audit 2026-03-08T22:57:34.931416+0000 mon.a (mon.0) 117 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:36.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:35 vm06 bash[20625]: audit 
2026-03-08T22:57:34.933964+0000 mon.a (mon.0) 118 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:36.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:35 vm06 bash[20625]: audit 2026-03-08T22:57:34.933964+0000 mon.a (mon.0) 118 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:36.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:35 vm06 bash[20625]: audit 2026-03-08T22:57:34.934567+0000 mon.a (mon.0) 119 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm11", "name": "osd_memory_target"}]: dispatch 2026-03-08T22:57:36.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:35 vm06 bash[20625]: audit 2026-03-08T22:57:34.934567+0000 mon.a (mon.0) 119 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm11", "name": "osd_memory_target"}]: dispatch 2026-03-08T22:57:36.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:35 vm06 bash[20625]: audit 2026-03-08T22:57:34.935138+0000 mon.a (mon.0) 120 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T22:57:36.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:35 vm06 bash[20625]: audit 2026-03-08T22:57:34.935138+0000 mon.a (mon.0) 120 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T22:57:36.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:35 vm06 bash[20625]: audit 2026-03-08T22:57:34.935485+0000 mon.a (mon.0) 121 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T22:57:36.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:35 vm06 bash[20625]: audit 2026-03-08T22:57:34.935485+0000 
mon.a (mon.0) 121 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T22:57:36.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:35 vm06 bash[20625]: cephadm 2026-03-08T22:57:34.936009+0000 mgr.y (mgr.14150) 20 : cephadm [INF] Updating vm11:/etc/ceph/ceph.conf 2026-03-08T22:57:36.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:35 vm06 bash[20625]: cephadm 2026-03-08T22:57:34.936009+0000 mgr.y (mgr.14150) 20 : cephadm [INF] Updating vm11:/etc/ceph/ceph.conf 2026-03-08T22:57:36.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:35 vm06 bash[20625]: cephadm 2026-03-08T22:57:34.971549+0000 mgr.y (mgr.14150) 21 : cephadm [INF] Updating vm11:/var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/config/ceph.conf 2026-03-08T22:57:36.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:35 vm06 bash[20625]: cephadm 2026-03-08T22:57:34.971549+0000 mgr.y (mgr.14150) 21 : cephadm [INF] Updating vm11:/var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/config/ceph.conf 2026-03-08T22:57:36.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:35 vm06 bash[20625]: cephadm 2026-03-08T22:57:35.000633+0000 mgr.y (mgr.14150) 22 : cephadm [INF] Updating vm11:/etc/ceph/ceph.client.admin.keyring 2026-03-08T22:57:36.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:35 vm06 bash[20625]: cephadm 2026-03-08T22:57:35.000633+0000 mgr.y (mgr.14150) 22 : cephadm [INF] Updating vm11:/etc/ceph/ceph.client.admin.keyring 2026-03-08T22:57:36.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:35 vm06 bash[20625]: cephadm 2026-03-08T22:57:35.031228+0000 mgr.y (mgr.14150) 23 : cephadm [INF] Updating vm11:/var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/config/ceph.client.admin.keyring 2026-03-08T22:57:36.281 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:35 vm06 bash[20625]: cephadm 2026-03-08T22:57:35.031228+0000 mgr.y (mgr.14150) 23 : cephadm [INF] Updating 
vm11:/var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/config/ceph.client.admin.keyring 2026-03-08T22:57:36.281 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:35 vm06 bash[20625]: audit 2026-03-08T22:57:35.065610+0000 mon.a (mon.0) 122 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:36.281 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:35 vm06 bash[20625]: audit 2026-03-08T22:57:35.065610+0000 mon.a (mon.0) 122 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:36.281 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:35 vm06 bash[20625]: audit 2026-03-08T22:57:35.067622+0000 mon.a (mon.0) 123 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:36.281 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:35 vm06 bash[20625]: audit 2026-03-08T22:57:35.067622+0000 mon.a (mon.0) 123 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:36.281 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:35 vm06 bash[20625]: audit 2026-03-08T22:57:35.069492+0000 mon.a (mon.0) 124 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:36.281 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:35 vm06 bash[20625]: audit 2026-03-08T22:57:35.069492+0000 mon.a (mon.0) 124 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:36.281 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:35 vm06 bash[20625]: audit 2026-03-08T22:57:35.104144+0000 mgr.y (mgr.14150) 24 : audit [DBG] from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-08T22:57:36.281 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:35 vm06 bash[20625]: audit 2026-03-08T22:57:35.104144+0000 mgr.y (mgr.14150) 24 : audit [DBG] from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch host 
ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-08T22:57:37.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:37 vm06 bash[20625]: cluster 2026-03-08T22:57:35.450040+0000 mgr.y (mgr.14150) 25 : cluster [DBG] pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T22:57:37.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:37 vm06 bash[20625]: cluster 2026-03-08T22:57:35.450040+0000 mgr.y (mgr.14150) 25 : cluster [DBG] pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T22:57:38.827 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/mon.a/config 2026-03-08T22:57:39.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:39 vm06 bash[20625]: cluster 2026-03-08T22:57:37.450302+0000 mgr.y (mgr.14150) 26 : cluster [DBG] pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T22:57:39.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:39 vm06 bash[20625]: cluster 2026-03-08T22:57:37.450302+0000 mgr.y (mgr.14150) 26 : cluster [DBG] pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T22:57:40.082 INFO:teuthology.orchestra.run.vm06.stderr:adjusted tunables profile to default 2026-03-08T22:57:40.140 INFO:tasks.cephadm:Adding mon.a on vm06 2026-03-08T22:57:40.140 INFO:tasks.cephadm:Adding mon.c on vm06 2026-03-08T22:57:40.140 INFO:tasks.cephadm:Adding mon.b on vm11 2026-03-08T22:57:40.140 DEBUG:teuthology.orchestra.run.vm11:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e2eb96e6-1b41-11f1-83e5-75f1b5373d30 -- ceph orch apply mon '3;vm06:192.168.123.106=a;vm06:[v2:192.168.123.106:3301,v1:192.168.123.106:6790]=c;vm11:192.168.123.111=b' 2026-03-08T22:57:40.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:40 vm06 bash[20625]: audit 2026-03-08T22:57:39.080755+0000 mon.a (mon.0) 125 : audit [INF] 
from='client.? 192.168.123.106:0/3008141344' entity='client.admin' cmd=[{"prefix": "osd crush tunables", "profile": "default"}]: dispatch 2026-03-08T22:57:40.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:40 vm06 bash[20625]: audit 2026-03-08T22:57:39.080755+0000 mon.a (mon.0) 125 : audit [INF] from='client.? 192.168.123.106:0/3008141344' entity='client.admin' cmd=[{"prefix": "osd crush tunables", "profile": "default"}]: dispatch 2026-03-08T22:57:41.244 INFO:teuthology.orchestra.run.vm11.stderr:Inferring config /var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/config/ceph.conf 2026-03-08T22:57:41.486 INFO:teuthology.orchestra.run.vm11.stdout:Scheduled mon update... 2026-03-08T22:57:41.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:41 vm06 bash[20625]: cluster 2026-03-08T22:57:39.450449+0000 mgr.y (mgr.14150) 27 : cluster [DBG] pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T22:57:41.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:41 vm06 bash[20625]: cluster 2026-03-08T22:57:39.450449+0000 mgr.y (mgr.14150) 27 : cluster [DBG] pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T22:57:41.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:41 vm06 bash[20625]: audit 2026-03-08T22:57:40.081841+0000 mon.a (mon.0) 126 : audit [INF] from='client.? 192.168.123.106:0/3008141344' entity='client.admin' cmd='[{"prefix": "osd crush tunables", "profile": "default"}]': finished 2026-03-08T22:57:41.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:41 vm06 bash[20625]: audit 2026-03-08T22:57:40.081841+0000 mon.a (mon.0) 126 : audit [INF] from='client.? 
192.168.123.106:0/3008141344' entity='client.admin' cmd='[{"prefix": "osd crush tunables", "profile": "default"}]': finished 2026-03-08T22:57:41.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:41 vm06 bash[20625]: cluster 2026-03-08T22:57:40.083336+0000 mon.a (mon.0) 127 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-08T22:57:41.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:41 vm06 bash[20625]: cluster 2026-03-08T22:57:40.083336+0000 mon.a (mon.0) 127 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-08T22:57:41.566 DEBUG:teuthology.orchestra.run.vm06:mon.c> sudo journalctl -f -n 0 -u ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@mon.c.service 2026-03-08T22:57:41.567 DEBUG:teuthology.orchestra.run.vm11:mon.b> sudo journalctl -f -n 0 -u ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@mon.b.service 2026-03-08T22:57:41.568 INFO:tasks.cephadm:Waiting for 3 mons in monmap... 2026-03-08T22:57:41.568 DEBUG:teuthology.orchestra.run.vm11:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e2eb96e6-1b41-11f1-83e5-75f1b5373d30 -- ceph mon dump -f json 2026-03-08T22:57:42.725 INFO:teuthology.orchestra.run.vm11.stderr:Inferring config /var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/mon.b/config 2026-03-08T22:57:42.731 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:42 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
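"Waiting for 3 mons in monmap..." above is implemented by repeatedly running `ceph mon dump -f json` and counting the entries until the expected number appears. A sketch of that check against a stand-in monmap document (the JSON here is a simplified placeholder, not the run's actual monmap):

```shell
# Hypothetical stand-in for `ceph mon dump -f json` output; the real
# monmap carries addrs, ranks, and an fsid as well.
monmap='{"epoch": 3, "mons": [{"name": "a"}, {"name": "b"}, {"name": "c"}]}'
# Count the mons the way a wait-for-monmap poll would before declaring
# the quorum membership complete.
count=$(printf '%s' "$monmap" \
  | python3 -c 'import json, sys; print(len(json.load(sys.stdin)["mons"]))')
echo "$count"
```

In the live loop this count is compared against the roles in the job config (mon.a, mon.b, mon.c here), and the task sleeps and retries until they match or a timeout fires.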
2026-03-08T22:57:42.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:42 vm06 bash[20625]: cluster 2026-03-08T22:57:41.450605+0000 mgr.y (mgr.14150) 28 : cluster [DBG] pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T22:57:42.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:42 vm06 bash[20625]: audit 2026-03-08T22:57:41.481598+0000 mgr.y (mgr.14150) 29 : audit [DBG] from='client.14180 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "placement": "3;vm06:192.168.123.106=a;vm06:[v2:192.168.123.106:3301,v1:192.168.123.106:6790]=c;vm11:192.168.123.111=b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T22:57:42.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:42 vm06 bash[20625]: cephadm 2026-03-08T22:57:41.482650+0000 mgr.y (mgr.14150) 30 : cephadm [INF] Saving service mon spec with placement vm06:192.168.123.106=a;vm06:[v2:192.168.123.106:3301,v1:192.168.123.106:6790]=c;vm11:192.168.123.111=b;count:3 2026-03-08T22:57:42.780
INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:42 vm06 bash[20625]: audit 2026-03-08T22:57:41.485345+0000 mon.a (mon.0) 128 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:42.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:42 vm06 bash[20625]: audit 2026-03-08T22:57:41.485989+0000 mon.a (mon.0) 129 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T22:57:42.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:42 vm06 bash[20625]: audit 2026-03-08T22:57:41.487131+0000 mon.a (mon.0) 130 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T22:57:42.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:42 vm06 bash[20625]: audit 2026-03-08T22:57:41.487617+0000 mon.a (mon.0) 131 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T22:57:42.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:42 vm06 bash[20625]: audit 2026-03-08T22:57:41.491646+0000 mon.a (mon.0) 132 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:42.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:42 vm06 bash[20625]: audit 2026-03-08T22:57:41.493113+0000 mon.a (mon.0) 133 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-08T22:57:42.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:42 vm06 bash[20625]: audit 2026-03-08T22:57:41.493654+0000 mon.a (mon.0) 134 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T22:57:42.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:42 vm06 bash[20625]: cephadm 2026-03-08T22:57:41.494259+0000 mgr.y (mgr.14150) 31 : cephadm [INF] Deploying daemon mon.b on vm11
2026-03-08T22:57:43.054 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:42 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T22:57:43.054 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:42 vm11 systemd[1]: Started Ceph mon.b for e2eb96e6-1b41-11f1-83e5-75f1b5373d30.
2026-03-08T22:57:43.191 INFO:teuthology.orchestra.run.vm11.stdout: 2026-03-08T22:57:43.191 INFO:teuthology.orchestra.run.vm11.stdout:{"epoch":1,"fsid":"e2eb96e6-1b41-11f1-83e5-75f1b5373d30","modified":"2026-03-08T22:56:48.853084Z","created":"2026-03-08T22:56:48.853084Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:3300","nonce":0},{"type":"v1","addr":"192.168.123.106:6789","nonce":0}]},"addr":"192.168.123.106:6789/0","public_addr":"192.168.123.106:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]} 2026-03-08T22:57:43.191 INFO:teuthology.orchestra.run.vm11.stderr:dumped monmap epoch 1 2026-03-08T22:57:43.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.083+0000 7fd94f73fd80 0 set uid:gid to 167:167 (ceph:ceph) 2026-03-08T22:57:43.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.083+0000 7fd94f73fd80 0 ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable), process ceph-mon, pid 8 2026-03-08T22:57:43.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.083+0000 7fd94f73fd80 0 pidfile_write: ignore empty --pid-file 2026-03-08T22:57:43.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 0 load: jerasure load: lrc 2026-03-08T22:57:43.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: RocksDB version: 7.9.2 2026-03-08T22:57:43.308 
INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Git sha 0 2026-03-08T22:57:43.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Compile date 2026-02-25 18:11:04 2026-03-08T22:57:43.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: DB SUMMARY 2026-03-08T22:57:43.309 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: DB Session ID: 5ZYB51ZOUXF8AIZB880H 2026-03-08T22:57:43.309 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: CURRENT file: CURRENT 2026-03-08T22:57:43.309 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: IDENTITY file: IDENTITY 2026-03-08T22:57:43.309 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: MANIFEST file: MANIFEST-000005 size: 59 Bytes 2026-03-08T22:57:43.309 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: SST files in /var/lib/ceph/mon/ceph-b/store.db dir, Total Num: 0, files: 2026-03-08T22:57:43.309 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-b/store.db: 000004.log size: 511 ; 2026-03-08T22:57:43.309 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.error_if_exists: 0 2026-03-08T22:57:43.309 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: 
debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.create_if_missing: 0 2026-03-08T22:57:43.309 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.paranoid_checks: 1 2026-03-08T22:57:43.309 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.flush_verify_memtable_count: 1 2026-03-08T22:57:43.309 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.track_and_verify_wals_in_manifest: 0 2026-03-08T22:57:43.309 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.verify_sst_unique_id_in_manifest: 1 2026-03-08T22:57:43.309 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.env: 0x55bad0a35dc0 2026-03-08T22:57:43.309 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.fs: PosixFileSystem 2026-03-08T22:57:43.309 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.info_log: 0x55baf0c42700 2026-03-08T22:57:43.309 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.max_file_opening_threads: 16 2026-03-08T22:57:43.309 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.statistics: (nil) 2026-03-08T22:57:43.309 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.use_fsync: 0 2026-03-08T22:57:43.309 
INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.max_log_file_size: 0 2026-03-08T22:57:43.309 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.max_manifest_file_size: 1073741824 2026-03-08T22:57:43.309 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.log_file_time_to_roll: 0 2026-03-08T22:57:43.309 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.keep_log_file_num: 1000 2026-03-08T22:57:43.309 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.recycle_log_file_num: 0 2026-03-08T22:57:43.309 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.allow_fallocate: 1 2026-03-08T22:57:43.309 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.allow_mmap_reads: 0 2026-03-08T22:57:43.309 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.allow_mmap_writes: 0 2026-03-08T22:57:43.309 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.use_direct_reads: 0 2026-03-08T22:57:43.309 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 2026-03-08T22:57:43.309 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 
7fd94f73fd80 4 rocksdb: Options.create_missing_column_families: 0 2026-03-08T22:57:43.309 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.db_log_dir: 2026-03-08T22:57:43.309 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.wal_dir: 2026-03-08T22:57:43.309 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.table_cache_numshardbits: 6 2026-03-08T22:57:43.309 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.WAL_ttl_seconds: 0 2026-03-08T22:57:43.309 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.WAL_size_limit_MB: 0 2026-03-08T22:57:43.309 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.max_write_batch_group_size_bytes: 1048576 2026-03-08T22:57:43.309 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.manifest_preallocation_size: 4194304 2026-03-08T22:57:43.309 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.is_fd_close_on_exec: 1 2026-03-08T22:57:43.309 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.advise_random_on_open: 1 2026-03-08T22:57:43.309 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.db_write_buffer_size: 0 2026-03-08T22:57:43.309 
INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.write_buffer_manager: 0x55baf0c47900 2026-03-08T22:57:43.309 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.access_hint_on_compaction_start: 1 2026-03-08T22:57:43.309 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.random_access_max_buffer_size: 1048576 2026-03-08T22:57:43.309 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.use_adaptive_mutex: 0 2026-03-08T22:57:43.309 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.rate_limiter: (nil) 2026-03-08T22:57:43.309 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 2026-03-08T22:57:43.309 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.wal_recovery_mode: 2 2026-03-08T22:57:43.309 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.enable_thread_tracking: 0 2026-03-08T22:57:43.309 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.enable_pipelined_write: 0 2026-03-08T22:57:43.309 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.unordered_write: 0 2026-03-08T22:57:43.309 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: 
debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.allow_concurrent_memtable_write: 1 2026-03-08T22:57:43.309 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.enable_write_thread_adaptive_yield: 1 2026-03-08T22:57:43.309 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.write_thread_max_yield_usec: 100 2026-03-08T22:57:43.309 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.write_thread_slow_yield_usec: 3 2026-03-08T22:57:43.309 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.row_cache: None 2026-03-08T22:57:43.309 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.wal_filter: None 2026-03-08T22:57:43.309 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.avoid_flush_during_recovery: 0 2026-03-08T22:57:43.309 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.allow_ingest_behind: 0 2026-03-08T22:57:43.309 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.two_write_queues: 0 2026-03-08T22:57:43.309 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.manual_wal_flush: 0 2026-03-08T22:57:43.309 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.wal_compression: 0 
2026-03-08T22:57:43.309 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.atomic_flush: 0 2026-03-08T22:57:43.310 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.avoid_unnecessary_blocking_io: 0 2026-03-08T22:57:43.310 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.persist_stats_to_disk: 0 2026-03-08T22:57:43.310 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.write_dbid_to_manifest: 0 2026-03-08T22:57:43.310 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.log_readahead_size: 0 2026-03-08T22:57:43.310 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.file_checksum_gen_factory: Unknown 2026-03-08T22:57:43.310 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.best_efforts_recovery: 0 2026-03-08T22:57:43.310 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.max_bgerror_resume_count: 2147483647 2026-03-08T22:57:43.310 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.bgerror_resume_retry_interval: 1000000 2026-03-08T22:57:43.310 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.allow_data_in_errors: 0 2026-03-08T22:57:43.310 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 
22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.db_host_id: __hostname__ 2026-03-08T22:57:43.310 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.enforce_single_del_contracts: true 2026-03-08T22:57:43.310 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.max_background_jobs: 2 2026-03-08T22:57:43.310 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.max_background_compactions: -1 2026-03-08T22:57:43.310 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.max_subcompactions: 1 2026-03-08T22:57:43.310 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.avoid_flush_during_shutdown: 0 2026-03-08T22:57:43.310 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.writable_file_max_buffer_size: 1048576 2026-03-08T22:57:43.310 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.delayed_write_rate : 16777216 2026-03-08T22:57:43.310 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.max_total_wal_size: 0 2026-03-08T22:57:43.310 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 2026-03-08T22:57:43.310 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 
2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.stats_dump_period_sec: 600 2026-03-08T22:57:43.310 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.stats_persist_period_sec: 600 2026-03-08T22:57:43.310 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.stats_history_buffer_size: 1048576 2026-03-08T22:57:43.310 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.max_open_files: -1 2026-03-08T22:57:43.310 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.bytes_per_sync: 0 2026-03-08T22:57:43.310 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.wal_bytes_per_sync: 0 2026-03-08T22:57:43.310 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.strict_bytes_per_sync: 0 2026-03-08T22:57:43.310 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.compaction_readahead_size: 0 2026-03-08T22:57:43.310 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.max_background_flushes: -1 2026-03-08T22:57:43.310 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Compression algorithms supported: 2026-03-08T22:57:43.310 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: kZSTD supported: 0 2026-03-08T22:57:43.310 
INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: kXpressCompression supported: 0 2026-03-08T22:57:43.310 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: kBZip2Compression supported: 0 2026-03-08T22:57:43.310 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: kZSTDNotFinalCompression supported: 0 2026-03-08T22:57:43.310 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: kLZ4Compression supported: 1 2026-03-08T22:57:43.310 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: kZlibCompression supported: 1 2026-03-08T22:57:43.310 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: kLZ4HCCompression supported: 1 2026-03-08T22:57:43.310 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: kSnappyCompression supported: 1 2026-03-08T22:57:43.310 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Fast CRC32 supported: Supported on x86 2026-03-08T22:57:43.310 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: DMutex implementation: pthread_mutex_t 2026-03-08T22:57:43.310 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-b/store.db/MANIFEST-000005 2026-03-08T22:57:43.310 
INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]: 2026-03-08T22:57:43.310 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.comparator: leveldb.BytewiseComparator 2026-03-08T22:57:43.310 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.merge_operator: 2026-03-08T22:57:43.310 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.compaction_filter: None 2026-03-08T22:57:43.310 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.compaction_filter_factory: None 2026-03-08T22:57:43.310 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.sst_partitioner_factory: None 2026-03-08T22:57:43.310 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.memtable_factory: SkipListFactory 2026-03-08T22:57:43.310 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.table_factory: BlockBasedTable 2026-03-08T22:57:43.310 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55baf0c42640) 2026-03-08T22:57:43.310 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cache_index_and_filter_blocks: 1 2026-03-08T22:57:43.310 
INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cache_index_and_filter_blocks_with_high_priority: 0 2026-03-08T22:57:43.310 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: pin_l0_filter_and_index_blocks_in_cache: 0 2026-03-08T22:57:43.310 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: pin_top_level_index_and_filter: 1 2026-03-08T22:57:43.310 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: index_type: 0 2026-03-08T22:57:43.310 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: data_block_index_type: 0 2026-03-08T22:57:43.310 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: index_shortening: 1 2026-03-08T22:57:43.310 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: data_block_hash_table_util_ratio: 0.750000 2026-03-08T22:57:43.310 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: checksum: 4 2026-03-08T22:57:43.310 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: no_block_cache: 0 2026-03-08T22:57:43.310 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: block_cache: 0x55baf0c69350 2026-03-08T22:57:43.310 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: block_cache_name: BinnedLRUCache 2026-03-08T22:57:43.310 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: block_cache_options: 2026-03-08T22:57:43.310 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: capacity : 536870912 2026-03-08T22:57:43.311 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: num_shard_bits : 4 2026-03-08T22:57:43.311 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: strict_capacity_limit : 0 2026-03-08T22:57:43.311 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: high_pri_pool_ratio: 0.000 2026-03-08T22:57:43.311 
INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: block_cache_compressed: (nil) 2026-03-08T22:57:43.311 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: persistent_cache: (nil) 2026-03-08T22:57:43.311 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: block_size: 4096 2026-03-08T22:57:43.311 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: block_size_deviation: 10 2026-03-08T22:57:43.311 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: block_restart_interval: 16 2026-03-08T22:57:43.311 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: index_block_restart_interval: 1 2026-03-08T22:57:43.311 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: metadata_block_size: 4096 2026-03-08T22:57:43.311 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: partition_filters: 0 2026-03-08T22:57:43.311 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: use_delta_encoding: 1 2026-03-08T22:57:43.311 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: filter_policy: bloomfilter 2026-03-08T22:57:43.311 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: whole_key_filtering: 1 2026-03-08T22:57:43.311 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: verify_compression: 0 2026-03-08T22:57:43.311 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: read_amp_bytes_per_bit: 0 2026-03-08T22:57:43.311 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: format_version: 5 2026-03-08T22:57:43.311 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: enable_index_compression: 1 2026-03-08T22:57:43.311 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: block_align: 0 2026-03-08T22:57:43.311 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 
bash[23232]: max_auto_readahead_size: 262144 2026-03-08T22:57:43.311 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: prepopulate_block_cache: 0 2026-03-08T22:57:43.311 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: initial_auto_readahead_size: 8192 2026-03-08T22:57:43.311 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: num_file_reads_for_auto_readahead: 2 2026-03-08T22:57:43.311 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.write_buffer_size: 33554432 2026-03-08T22:57:43.311 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.max_write_buffer_number: 2 2026-03-08T22:57:43.311 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.compression: NoCompression 2026-03-08T22:57:43.311 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.bottommost_compression: Disabled 2026-03-08T22:57:43.311 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.prefix_extractor: nullptr 2026-03-08T22:57:43.311 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr 2026-03-08T22:57:43.311 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.num_levels: 7 2026-03-08T22:57:43.311 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.min_write_buffer_number_to_merge: 1 
2026-03-08T22:57:43.311 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.max_write_buffer_number_to_maintain: 0 2026-03-08T22:57:43.311 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.max_write_buffer_size_to_maintain: 0 2026-03-08T22:57:43.311 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.bottommost_compression_opts.window_bits: -14 2026-03-08T22:57:43.311 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.bottommost_compression_opts.level: 32767 2026-03-08T22:57:43.311 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.bottommost_compression_opts.strategy: 0 2026-03-08T22:57:43.311 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 2026-03-08T22:57:43.311 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 2026-03-08T22:57:43.311 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 2026-03-08T22:57:43.311 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.bottommost_compression_opts.enabled: false 2026-03-08T22:57:43.311 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 
7fd94f73fd80 4 rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 2026-03-08T22:57:43.311 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true 2026-03-08T22:57:43.311 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.compression_opts.window_bits: -14 2026-03-08T22:57:43.311 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.compression_opts.level: 32767 2026-03-08T22:57:43.311 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.compression_opts.strategy: 0 2026-03-08T22:57:43.311 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.compression_opts.max_dict_bytes: 0 2026-03-08T22:57:43.311 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 2026-03-08T22:57:43.311 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.compression_opts.use_zstd_dict_trainer: true 2026-03-08T22:57:43.311 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.compression_opts.parallel_threads: 1 2026-03-08T22:57:43.311 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.compression_opts.enabled: false 2026-03-08T22:57:43.311 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: 
debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 2026-03-08T22:57:43.311 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.level0_file_num_compaction_trigger: 4 2026-03-08T22:57:43.311 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.level0_slowdown_writes_trigger: 20 2026-03-08T22:57:43.311 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.level0_stop_writes_trigger: 36 2026-03-08T22:57:43.311 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.target_file_size_base: 67108864 2026-03-08T22:57:43.311 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.target_file_size_multiplier: 1 2026-03-08T22:57:43.311 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.max_bytes_for_level_base: 268435456 2026-03-08T22:57:43.311 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.level_compaction_dynamic_level_bytes: 1 2026-03-08T22:57:43.311 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.max_bytes_for_level_multiplier: 10.000000 2026-03-08T22:57:43.311 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 2026-03-08T22:57:43.311 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 
bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 2026-03-08T22:57:43.311 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 2026-03-08T22:57:43.311 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 2026-03-08T22:57:43.311 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 2026-03-08T22:57:43.311 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 2026-03-08T22:57:43.312 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 2026-03-08T22:57:43.312 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.max_sequential_skip_in_iterations: 8 2026-03-08T22:57:43.312 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.max_compaction_bytes: 1677721600 2026-03-08T22:57:43.312 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.ignore_max_compaction_bytes_for_input: true 2026-03-08T22:57:43.312 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.arena_block_size: 1048576 2026-03-08T22:57:43.312 
INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 2026-03-08T22:57:43.312 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 2026-03-08T22:57:43.312 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.disable_auto_compactions: 0 2026-03-08T22:57:43.312 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.compaction_style: kCompactionStyleLevel 2026-03-08T22:57:43.312 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.compaction_pri: kMinOverlappingRatio 2026-03-08T22:57:43.312 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.compaction_options_universal.size_ratio: 1 2026-03-08T22:57:43.312 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.compaction_options_universal.min_merge_width: 2 2026-03-08T22:57:43.312 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 2026-03-08T22:57:43.312 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 2026-03-08T22:57:43.312 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 
2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.compaction_options_universal.compression_size_percent: -1 2026-03-08T22:57:43.312 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize 2026-03-08T22:57:43.312 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 2026-03-08T22:57:43.312 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.compaction_options_fifo.allow_compaction: 0 2026-03-08T22:57:43.312 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); 2026-03-08T22:57:43.312 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.inplace_update_support: 0 2026-03-08T22:57:43.312 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.inplace_update_num_locks: 10000 2026-03-08T22:57:43.312 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 2026-03-08T22:57:43.312 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.memtable_whole_key_filtering: 0 2026-03-08T22:57:43.312 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 
7fd94f73fd80 4 rocksdb: Options.memtable_huge_page_size: 0 2026-03-08T22:57:43.312 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.bloom_locality: 0 2026-03-08T22:57:43.312 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.max_successive_merges: 0 2026-03-08T22:57:43.312 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.optimize_filters_for_hits: 0 2026-03-08T22:57:43.312 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.paranoid_file_checks: 0 2026-03-08T22:57:43.312 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.force_consistency_checks: 1 2026-03-08T22:57:43.312 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.report_bg_io_stats: 0 2026-03-08T22:57:43.312 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.ttl: 2592000 2026-03-08T22:57:43.312 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.periodic_compaction_seconds: 0 2026-03-08T22:57:43.312 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.preclude_last_level_data_seconds: 0 2026-03-08T22:57:43.312 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.preserve_internal_time_seconds: 0 2026-03-08T22:57:43.312 
INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.enable_blob_files: false 2026-03-08T22:57:43.312 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.min_blob_size: 0 2026-03-08T22:57:43.312 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.blob_file_size: 268435456 2026-03-08T22:57:43.312 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.blob_compression_type: NoCompression 2026-03-08T22:57:43.312 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.enable_blob_garbage_collection: false 2026-03-08T22:57:43.312 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 2026-03-08T22:57:43.312 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 2026-03-08T22:57:43.312 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.blob_compaction_readahead_size: 0 2026-03-08T22:57:43.312 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.blob_file_starting_level: 0 2026-03-08T22:57:43.312 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.087+0000 7fd94f73fd80 4 rocksdb: Options.experimental_mempurge_threshold: 0.000000 2026-03-08T22:57:43.312 
INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.091+0000 7fd94f73fd80 4 rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-b/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0 2026-03-08T22:57:43.313 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.091+0000 7fd94f73fd80 4 rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0 2026-03-08T22:57:43.313 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.091+0000 7fd94f73fd80 4 rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 64b0c30d-023e-477b-8e78-6ab7114bb689 2026-03-08T22:57:43.313 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.091+0000 7fd94f73fd80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773010663096582, "job": 1, "event": "recovery_started", "wal_files": [4]} 2026-03-08T22:57:43.313 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.091+0000 7fd94f73fd80 4 rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2 2026-03-08T22:57:43.313 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.095+0000 7fd94f73fd80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773010663100081, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1643, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 523, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, 
"raw_value_size": 401, "raw_average_value_size": 80, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1773010663, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "64b0c30d-023e-477b-8e78-6ab7114bb689", "db_session_id": "5ZYB51ZOUXF8AIZB880H", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}} 2026-03-08T22:57:43.313 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.095+0000 7fd94f73fd80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773010663100133, "job": 1, "event": "recovery_finished"} 2026-03-08T22:57:43.313 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.095+0000 7fd94f73fd80 4 rocksdb: [db/version_set.cc:5047] Creating manifest 10 2026-03-08T22:57:43.313 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.103+0000 7fd94f73fd80 4 rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-b/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 2026-03-08T22:57:43.313 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.103+0000 7fd94f73fd80 4 rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55baf0c6ae00 
2026-03-08T22:57:43.313 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.103+0000 7fd94f73fd80 4 rocksdb: DB pointer 0x55baf0d80000 2026-03-08T22:57:43.313 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.103+0000 7fd945509640 4 rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- 2026-03-08T22:57:43.313 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.103+0000 7fd945509640 4 rocksdb: [db/db_impl/db_impl.cc:1111] 2026-03-08T22:57:43.313 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: ** DB Stats ** 2026-03-08T22:57:43.313 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: Uptime(secs): 0.0 total, 0.0 interval 2026-03-08T22:57:43.313 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s 2026-03-08T22:57:43.313 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s 2026-03-08T22:57:43.313 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent 2026-03-08T22:57:43.313 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s 2026-03-08T22:57:43.313 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s 2026-03-08T22:57:43.313 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: Interval stall: 00:00:0.000 H:M:S, 0.0 percent 2026-03-08T22:57:43.313 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: ** 
Compaction Stats [default] ** 2026-03-08T22:57:43.313 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB) 2026-03-08T22:57:43.313 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ 2026-03-08T22:57:43.313 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: L0 1/0 1.60 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.4 0.00 0.00 1 0.003 0 0 0.0 0.0 2026-03-08T22:57:43.313 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: Sum 1/0 1.60 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.4 0.00 0.00 1 0.003 0 0 0.0 0.0 2026-03-08T22:57:43.313 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.4 0.00 0.00 1 0.003 0 0 0.0 0.0 2026-03-08T22:57:43.313 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: ** Compaction Stats [default] ** 2026-03-08T22:57:43.313 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB) 2026-03-08T22:57:43.313 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 2026-03-08T22:57:43.313 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 
0.0 0.4 0.00 0.00 1 0.003 0 0 0.0 0.0 2026-03-08T22:57:43.313 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0 2026-03-08T22:57:43.313 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: Uptime(secs): 0.0 total, 0.0 interval 2026-03-08T22:57:43.313 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: Flush(GB): cumulative 0.000, interval 0.000 2026-03-08T22:57:43.313 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: AddFile(GB): cumulative 0.000, interval 0.000 2026-03-08T22:57:43.313 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: AddFile(Total Files): cumulative 0, interval 0 2026-03-08T22:57:43.313 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: AddFile(L0 Files): cumulative 0, interval 0 2026-03-08T22:57:43.313 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: AddFile(Keys): cumulative 0, interval 0 2026-03-08T22:57:43.313 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: Cumulative compaction: 0.00 GB write, 0.08 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-08T22:57:43.313 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: Interval compaction: 0.00 GB write, 0.08 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-08T22:57:43.313 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count 2026-03-08T22:57:43.313 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: Block cache BinnedLRUCache@0x55baf0c69350#8 capacity: 512.00 MB usage: 0.22 KB table_size: 0 
occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 1.1e-05 secs_since: 0 2026-03-08T22:57:43.313 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%) 2026-03-08T22:57:43.313 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: ** File Read Latency Histogram By Level [default] ** 2026-03-08T22:57:43.314 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.103+0000 7fd94f73fd80 0 mon.b does not exist in monmap, will attempt to join an existing cluster 2026-03-08T22:57:43.314 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.107+0000 7fd94f73fd80 0 using public_addr v2:192.168.123.111:0/0 -> [v2:192.168.123.111:3300/0,v1:192.168.123.111:6789/0] 2026-03-08T22:57:43.314 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.107+0000 7fd94f73fd80 0 starting mon.b rank -1 at public addrs [v2:192.168.123.111:3300/0,v1:192.168.123.111:6789/0] at bind addrs [v2:192.168.123.111:3300/0,v1:192.168.123.111:6789/0] mon_data /var/lib/ceph/mon/ceph-b fsid e2eb96e6-1b41-11f1-83e5-75f1b5373d30 2026-03-08T22:57:43.314 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.107+0000 7fd94f73fd80 1 mon.b@-1(???) 
e0 preinit fsid e2eb96e6-1b41-11f1-83e5-75f1b5373d30 2026-03-08T22:57:43.314 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.143+0000 7fd94850f640 0 mon.b@-1(synchronizing).mds e1 new map 2026-03-08T22:57:43.314 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.147+0000 7fd94850f640 0 mon.b@-1(synchronizing).mds e1 print_map 2026-03-08T22:57:43.314 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: e1 2026-03-08T22:57:43.314 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: btime 2026-03-08T22:56:50:042781+0000 2026-03-08T22:57:43.314 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: enable_multiple, ever_enabled_multiple: 1,1 2026-03-08T22:57:43.314 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes} 2026-03-08T22:57:43.314 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: legacy client fscid: -1 2026-03-08T22:57:43.314 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: 2026-03-08T22:57:43.314 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: No filesystems configured 2026-03-08T22:57:43.314 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.147+0000 7fd94850f640 1 mon.b@-1(synchronizing).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375 2026-03-08T22:57:43.314 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.147+0000 7fd94850f640 1 mon.b@-1(synchronizing).osd e0 register_cache_with_pcm pcm 
target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1 2026-03-08T22:57:43.314 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.147+0000 7fd94850f640 1 mon.b@-1(synchronizing).osd e1 e1: 0 total, 0 up, 0 in 2026-03-08T22:57:43.314 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.147+0000 7fd94850f640 1 mon.b@-1(synchronizing).osd e2 e2: 0 total, 0 up, 0 in 2026-03-08T22:57:43.314 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.147+0000 7fd94850f640 1 mon.b@-1(synchronizing).osd e3 e3: 0 total, 0 up, 0 in 2026-03-08T22:57:43.314 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.147+0000 7fd94850f640 1 mon.b@-1(synchronizing).osd e4 e4: 0 total, 0 up, 0 in 2026-03-08T22:57:43.314 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.155+0000 7fd94850f640 0 mon.b@-1(synchronizing).osd e4 crush map has features 3314932999778484224, adjusting msgr requires 2026-03-08T22:57:43.314 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.155+0000 7fd94850f640 0 mon.b@-1(synchronizing).osd e4 crush map has features 288514050185494528, adjusting msgr requires 2026-03-08T22:57:43.314 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.155+0000 7fd94850f640 0 mon.b@-1(synchronizing).osd e4 crush map has features 288514050185494528, adjusting msgr requires 2026-03-08T22:57:43.314 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.155+0000 7fd94850f640 0 mon.b@-1(synchronizing).osd e4 crush map has features 288514050185494528, adjusting msgr requires 2026-03-08T22:57:43.314 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cluster 
2026-03-08T22:56:50.043242+0000 mon.a (mon.0) 0 : cluster [INF] mkfs e2eb96e6-1b41-11f1-83e5-75f1b5373d30 2026-03-08T22:57:43.314 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cluster 2026-03-08T22:56:50.043242+0000 mon.a (mon.0) 0 : cluster [INF] mkfs e2eb96e6-1b41-11f1-83e5-75f1b5373d30 2026-03-08T22:57:43.314 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cluster 2026-03-08T22:56:50.037725+0000 mon.a (mon.0) 1 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0) 2026-03-08T22:57:43.314 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cluster 2026-03-08T22:56:50.037725+0000 mon.a (mon.0) 1 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0) 2026-03-08T22:57:43.314 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cluster 2026-03-08T22:56:50.907875+0000 mon.a (mon.0) 1 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0) 2026-03-08T22:57:43.314 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cluster 2026-03-08T22:56:50.907875+0000 mon.a (mon.0) 1 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0) 2026-03-08T22:57:43.314 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cluster 2026-03-08T22:56:50.907909+0000 mon.a (mon.0) 2 : cluster [DBG] monmap epoch 1 2026-03-08T22:57:43.314 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cluster 2026-03-08T22:56:50.907909+0000 mon.a (mon.0) 2 : cluster [DBG] monmap epoch 1 2026-03-08T22:57:43.314 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cluster 2026-03-08T22:56:50.907914+0000 mon.a (mon.0) 3 : cluster [DBG] fsid e2eb96e6-1b41-11f1-83e5-75f1b5373d30 2026-03-08T22:57:43.314 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cluster 2026-03-08T22:56:50.907914+0000 mon.a (mon.0) 3 : cluster [DBG] fsid e2eb96e6-1b41-11f1-83e5-75f1b5373d30 2026-03-08T22:57:43.314 
INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cluster 2026-03-08T22:56:50.907916+0000 mon.a (mon.0) 4 : cluster [DBG] last_changed 2026-03-08T22:56:48.853084+0000 2026-03-08T22:57:43.314 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cluster 2026-03-08T22:56:50.907916+0000 mon.a (mon.0) 4 : cluster [DBG] last_changed 2026-03-08T22:56:48.853084+0000 2026-03-08T22:57:43.314 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cluster 2026-03-08T22:56:50.907922+0000 mon.a (mon.0) 5 : cluster [DBG] created 2026-03-08T22:56:48.853084+0000 2026-03-08T22:57:43.314 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cluster 2026-03-08T22:56:50.907922+0000 mon.a (mon.0) 5 : cluster [DBG] created 2026-03-08T22:56:48.853084+0000 2026-03-08T22:57:43.314 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cluster 2026-03-08T22:56:50.907927+0000 mon.a (mon.0) 6 : cluster [DBG] min_mon_release 19 (squid) 2026-03-08T22:57:43.314 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cluster 2026-03-08T22:56:50.907927+0000 mon.a (mon.0) 6 : cluster [DBG] min_mon_release 19 (squid) 2026-03-08T22:57:43.314 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cluster 2026-03-08T22:56:50.907941+0000 mon.a (mon.0) 7 : cluster [DBG] election_strategy: 1 2026-03-08T22:57:43.314 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cluster 2026-03-08T22:56:50.907941+0000 mon.a (mon.0) 7 : cluster [DBG] election_strategy: 1 2026-03-08T22:57:43.314 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cluster 2026-03-08T22:56:50.907943+0000 mon.a (mon.0) 8 : cluster [DBG] 0: [v2:192.168.123.106:3300/0,v1:192.168.123.106:6789/0] mon.a 2026-03-08T22:57:43.314 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cluster 2026-03-08T22:56:50.907943+0000 mon.a (mon.0) 8 : cluster [DBG] 0: 
[v2:192.168.123.106:3300/0,v1:192.168.123.106:6789/0] mon.a 2026-03-08T22:57:43.314 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cluster 2026-03-08T22:56:50.908157+0000 mon.a (mon.0) 9 : cluster [DBG] fsmap 2026-03-08T22:57:43.314 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cluster 2026-03-08T22:56:50.908157+0000 mon.a (mon.0) 9 : cluster [DBG] fsmap 2026-03-08T22:57:43.314 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cluster 2026-03-08T22:56:50.908171+0000 mon.a (mon.0) 10 : cluster [DBG] osdmap e1: 0 total, 0 up, 0 in 2026-03-08T22:57:43.314 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cluster 2026-03-08T22:56:50.908171+0000 mon.a (mon.0) 10 : cluster [DBG] osdmap e1: 0 total, 0 up, 0 in 2026-03-08T22:57:43.314 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cluster 2026-03-08T22:56:50.908572+0000 mon.a (mon.0) 11 : cluster [DBG] mgrmap e1: no daemons active 2026-03-08T22:57:43.314 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cluster 2026-03-08T22:56:50.908572+0000 mon.a (mon.0) 11 : cluster [DBG] mgrmap e1: no daemons active 2026-03-08T22:57:43.314 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:56:51.124092+0000 mon.a (mon.0) 12 : audit [INF] from='client.? 192.168.123.106:0/3759692065' entity='client.admin' 2026-03-08T22:57:43.314 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:56:51.124092+0000 mon.a (mon.0) 12 : audit [INF] from='client.? 192.168.123.106:0/3759692065' entity='client.admin' 2026-03-08T22:57:43.314 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:56:51.732231+0000 mon.a (mon.0) 13 : audit [DBG] from='client.? 
192.168.123.106:0/3683851812' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-08T22:57:43.314 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:56:51.732231+0000 mon.a (mon.0) 13 : audit [DBG] from='client.? 192.168.123.106:0/3683851812' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-08T22:57:43.314 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:56:54.138601+0000 mon.a (mon.0) 14 : audit [DBG] from='client.? 192.168.123.106:0/4228632031' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-08T22:57:43.314 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:56:54.138601+0000 mon.a (mon.0) 14 : audit [DBG] from='client.? 192.168.123.106:0/4228632031' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-08T22:57:43.314 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cluster 2026-03-08T22:56:54.912358+0000 mon.a (mon.0) 15 : cluster [INF] Activating manager daemon y 2026-03-08T22:57:43.314 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cluster 2026-03-08T22:56:54.912358+0000 mon.a (mon.0) 15 : cluster [INF] Activating manager daemon y 2026-03-08T22:57:43.314 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cluster 2026-03-08T22:56:54.920019+0000 mon.a (mon.0) 16 : cluster [DBG] mgrmap e2: y(active, starting, since 0.00772146s) 2026-03-08T22:57:43.314 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cluster 2026-03-08T22:56:54.920019+0000 mon.a (mon.0) 16 : cluster [DBG] mgrmap e2: y(active, starting, since 0.00772146s) 2026-03-08T22:57:43.314 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:56:54.920994+0000 mon.a (mon.0) 17 : audit 
[DBG] from='mgr.14100 192.168.123.106:0/399162631' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-08T22:57:43.314 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:56:54.920994+0000 mon.a (mon.0) 17 : audit [DBG] from='mgr.14100 192.168.123.106:0/399162631' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-08T22:57:43.314 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:56:54.921104+0000 mon.a (mon.0) 18 : audit [DBG] from='mgr.14100 192.168.123.106:0/399162631' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-08T22:57:43.315 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:56:54.921104+0000 mon.a (mon.0) 18 : audit [DBG] from='mgr.14100 192.168.123.106:0/399162631' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-08T22:57:43.315 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:56:54.921186+0000 mon.a (mon.0) 19 : audit [DBG] from='mgr.14100 192.168.123.106:0/399162631' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-08T22:57:43.315 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:56:54.921186+0000 mon.a (mon.0) 19 : audit [DBG] from='mgr.14100 192.168.123.106:0/399162631' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-08T22:57:43.315 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:56:54.922157+0000 mon.a (mon.0) 20 : audit [DBG] from='mgr.14100 192.168.123.106:0/399162631' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-08T22:57:43.315 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:56:54.922157+0000 mon.a (mon.0) 20 : audit [DBG] from='mgr.14100 192.168.123.106:0/399162631' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": 
"a"}]: dispatch 2026-03-08T22:57:43.315 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:56:54.922244+0000 mon.a (mon.0) 21 : audit [DBG] from='mgr.14100 192.168.123.106:0/399162631' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-08T22:57:43.315 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:56:54.922244+0000 mon.a (mon.0) 21 : audit [DBG] from='mgr.14100 192.168.123.106:0/399162631' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-08T22:57:43.315 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cluster 2026-03-08T22:56:54.929285+0000 mon.a (mon.0) 22 : cluster [INF] Manager daemon y is now available 2026-03-08T22:57:43.315 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cluster 2026-03-08T22:56:54.929285+0000 mon.a (mon.0) 22 : cluster [INF] Manager daemon y is now available 2026-03-08T22:57:43.315 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:56:54.939623+0000 mon.a (mon.0) 23 : audit [INF] from='mgr.14100 192.168.123.106:0/399162631' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-08T22:57:43.315 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:56:54.939623+0000 mon.a (mon.0) 23 : audit [INF] from='mgr.14100 192.168.123.106:0/399162631' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-08T22:57:43.315 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:56:54.941884+0000 mon.a (mon.0) 24 : audit [INF] from='mgr.14100 192.168.123.106:0/399162631' entity='mgr.y' 2026-03-08T22:57:43.315 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 
2026-03-08T22:56:54.941884+0000 mon.a (mon.0) 24 : audit [INF] from='mgr.14100 192.168.123.106:0/399162631' entity='mgr.y' 2026-03-08T22:57:43.315 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:56:54.944463+0000 mon.a (mon.0) 25 : audit [INF] from='mgr.14100 192.168.123.106:0/399162631' entity='mgr.y' 2026-03-08T22:57:43.315 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:56:54.944463+0000 mon.a (mon.0) 25 : audit [INF] from='mgr.14100 192.168.123.106:0/399162631' entity='mgr.y' 2026-03-08T22:57:43.315 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:56:54.944871+0000 mon.a (mon.0) 26 : audit [INF] from='mgr.14100 192.168.123.106:0/399162631' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-08T22:57:43.315 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:56:54.944871+0000 mon.a (mon.0) 26 : audit [INF] from='mgr.14100 192.168.123.106:0/399162631' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-08T22:57:43.315 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:56:54.947494+0000 mon.a (mon.0) 27 : audit [INF] from='mgr.14100 192.168.123.106:0/399162631' entity='mgr.y' 2026-03-08T22:57:43.315 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:56:54.947494+0000 mon.a (mon.0) 27 : audit [INF] from='mgr.14100 192.168.123.106:0/399162631' entity='mgr.y' 2026-03-08T22:57:43.315 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cluster 2026-03-08T22:56:55.926686+0000 mon.a (mon.0) 28 : cluster [DBG] mgrmap e3: y(active, since 1.01439s) 2026-03-08T22:57:43.315 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cluster 
2026-03-08T22:56:55.926686+0000 mon.a (mon.0) 28 : cluster [DBG] mgrmap e3: y(active, since 1.01439s) 2026-03-08T22:57:43.315 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:56:56.554358+0000 mon.a (mon.0) 29 : audit [DBG] from='client.? 192.168.123.106:0/716421922' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-08T22:57:43.315 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:56:56.554358+0000 mon.a (mon.0) 29 : audit [DBG] from='client.? 192.168.123.106:0/716421922' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-08T22:57:43.315 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:56:56.808872+0000 mon.a (mon.0) 30 : audit [INF] from='client.? 192.168.123.106:0/2037649355' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch 2026-03-08T22:57:43.315 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:56:56.808872+0000 mon.a (mon.0) 30 : audit [INF] from='client.? 192.168.123.106:0/2037649355' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch 2026-03-08T22:57:43.315 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cluster 2026-03-08T22:56:56.926118+0000 mon.a (mon.0) 31 : cluster [DBG] mgrmap e4: y(active, since 2s) 2026-03-08T22:57:43.315 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cluster 2026-03-08T22:56:56.926118+0000 mon.a (mon.0) 31 : cluster [DBG] mgrmap e4: y(active, since 2s) 2026-03-08T22:57:43.315 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:56:57.085640+0000 mon.a (mon.0) 32 : audit [INF] from='client.? 
192.168.123.106:0/3781783341' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch 2026-03-08T22:57:43.315 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:56:57.085640+0000 mon.a (mon.0) 32 : audit [INF] from='client.? 192.168.123.106:0/3781783341' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch 2026-03-08T22:57:43.315 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:56:57.930339+0000 mon.a (mon.0) 33 : audit [INF] from='client.? 192.168.123.106:0/3781783341' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished 2026-03-08T22:57:43.315 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:56:57.930339+0000 mon.a (mon.0) 33 : audit [INF] from='client.? 192.168.123.106:0/3781783341' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished 2026-03-08T22:57:43.315 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cluster 2026-03-08T22:56:57.933237+0000 mon.a (mon.0) 34 : cluster [DBG] mgrmap e5: y(active, since 3s) 2026-03-08T22:57:43.315 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cluster 2026-03-08T22:56:57.933237+0000 mon.a (mon.0) 34 : cluster [DBG] mgrmap e5: y(active, since 3s) 2026-03-08T22:57:43.315 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:56:58.244704+0000 mon.a (mon.0) 35 : audit [DBG] from='client.? 192.168.123.106:0/663254225' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-08T22:57:43.315 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:56:58.244704+0000 mon.a (mon.0) 35 : audit [DBG] from='client.? 
192.168.123.106:0/663254225' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-08T22:57:43.315 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cluster 2026-03-08T22:57:01.174524+0000 mon.a (mon.0) 36 : cluster [INF] Active manager daemon y restarted 2026-03-08T22:57:43.315 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cluster 2026-03-08T22:57:01.174524+0000 mon.a (mon.0) 36 : cluster [INF] Active manager daemon y restarted 2026-03-08T22:57:43.315 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cluster 2026-03-08T22:57:01.174981+0000 mon.a (mon.0) 37 : cluster [INF] Activating manager daemon y 2026-03-08T22:57:43.315 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cluster 2026-03-08T22:57:01.174981+0000 mon.a (mon.0) 37 : cluster [INF] Activating manager daemon y 2026-03-08T22:57:43.315 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cluster 2026-03-08T22:57:01.199820+0000 mon.a (mon.0) 38 : cluster [DBG] osdmap e2: 0 total, 0 up, 0 in 2026-03-08T22:57:43.315 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cluster 2026-03-08T22:57:01.199820+0000 mon.a (mon.0) 38 : cluster [DBG] osdmap e2: 0 total, 0 up, 0 in 2026-03-08T22:57:43.315 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cluster 2026-03-08T22:57:01.199944+0000 mon.a (mon.0) 39 : cluster [DBG] mgrmap e6: y(active, starting, since 0.0251017s) 2026-03-08T22:57:43.315 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cluster 2026-03-08T22:57:01.199944+0000 mon.a (mon.0) 39 : cluster [DBG] mgrmap e6: y(active, starting, since 0.0251017s) 2026-03-08T22:57:43.315 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:01.202159+0000 mon.a (mon.0) 40 : audit [DBG] from='mgr.14118 192.168.123.106:0/3226879139' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": 
"a"}]: dispatch 2026-03-08T22:57:43.315 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:01.202159+0000 mon.a (mon.0) 40 : audit [DBG] from='mgr.14118 192.168.123.106:0/3226879139' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-08T22:57:43.315 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:01.202359+0000 mon.a (mon.0) 41 : audit [DBG] from='mgr.14118 192.168.123.106:0/3226879139' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-08T22:57:43.315 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:01.202359+0000 mon.a (mon.0) 41 : audit [DBG] from='mgr.14118 192.168.123.106:0/3226879139' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-08T22:57:43.315 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:01.203510+0000 mon.a (mon.0) 42 : audit [DBG] from='mgr.14118 192.168.123.106:0/3226879139' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-08T22:57:43.315 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:01.203510+0000 mon.a (mon.0) 42 : audit [DBG] from='mgr.14118 192.168.123.106:0/3226879139' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-08T22:57:43.315 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:01.203641+0000 mon.a (mon.0) 43 : audit [DBG] from='mgr.14118 192.168.123.106:0/3226879139' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-08T22:57:43.315 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:01.203641+0000 mon.a (mon.0) 43 : audit [DBG] from='mgr.14118 192.168.123.106:0/3226879139' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-08T22:57:43.315 
INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:01.203726+0000 mon.a (mon.0) 44 : audit [DBG] from='mgr.14118 192.168.123.106:0/3226879139' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-08T22:57:43.315 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:01.203726+0000 mon.a (mon.0) 44 : audit [DBG] from='mgr.14118 192.168.123.106:0/3226879139' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-08T22:57:43.315 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cluster 2026-03-08T22:57:01.209494+0000 mon.a (mon.0) 45 : cluster [INF] Manager daemon y is now available 2026-03-08T22:57:43.315 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cluster 2026-03-08T22:57:01.209494+0000 mon.a (mon.0) 45 : cluster [INF] Manager daemon y is now available 2026-03-08T22:57:43.315 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:01.224588+0000 mon.a (mon.0) 46 : audit [INF] from='mgr.14118 192.168.123.106:0/3226879139' entity='mgr.y' 2026-03-08T22:57:43.315 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:01.224588+0000 mon.a (mon.0) 46 : audit [INF] from='mgr.14118 192.168.123.106:0/3226879139' entity='mgr.y' 2026-03-08T22:57:43.315 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cephadm 2026-03-08T22:57:01.216793+0000 mgr.y (mgr.14118) 1 : cephadm [INF] Found migration_current of "None". Setting to last migration. 2026-03-08T22:57:43.315 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cephadm 2026-03-08T22:57:01.216793+0000 mgr.y (mgr.14118) 1 : cephadm [INF] Found migration_current of "None". Setting to last migration. 
2026-03-08T22:57:43.315 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:01.237993+0000 mon.a (mon.0) 47 : audit [INF] from='mgr.14118 192.168.123.106:0/3226879139' entity='mgr.y' 2026-03-08T22:57:43.315 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:01.237993+0000 mon.a (mon.0) 47 : audit [INF] from='mgr.14118 192.168.123.106:0/3226879139' entity='mgr.y' 2026-03-08T22:57:43.315 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:01.249781+0000 mon.a (mon.0) 48 : audit [DBG] from='mgr.14118 192.168.123.106:0/3226879139' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T22:57:43.315 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:01.249781+0000 mon.a (mon.0) 48 : audit [DBG] from='mgr.14118 192.168.123.106:0/3226879139' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T22:57:43.316 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:01.266217+0000 mon.a (mon.0) 49 : audit [DBG] from='mgr.14118 192.168.123.106:0/3226879139' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T22:57:43.316 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:01.266217+0000 mon.a (mon.0) 49 : audit [DBG] from='mgr.14118 192.168.123.106:0/3226879139' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T22:57:43.316 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:01.269013+0000 mon.a (mon.0) 50 : audit [INF] from='mgr.14118 192.168.123.106:0/3226879139' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-08T22:57:43.316 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 
08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:01.269013+0000 mon.a (mon.0) 50 : audit [INF] from='mgr.14118 192.168.123.106:0/3226879139' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-08T22:57:43.316 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:01.274091+0000 mon.a (mon.0) 51 : audit [INF] from='mgr.14118 192.168.123.106:0/3226879139' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-08T22:57:43.316 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:01.274091+0000 mon.a (mon.0) 51 : audit [INF] from='mgr.14118 192.168.123.106:0/3226879139' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-08T22:57:43.316 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:01.690283+0000 mon.a (mon.0) 52 : audit [INF] from='mgr.14118 192.168.123.106:0/3226879139' entity='mgr.y' 2026-03-08T22:57:43.316 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:01.690283+0000 mon.a (mon.0) 52 : audit [INF] from='mgr.14118 192.168.123.106:0/3226879139' entity='mgr.y' 2026-03-08T22:57:43.316 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:01.698127+0000 mon.a (mon.0) 53 : audit [INF] from='mgr.14118 192.168.123.106:0/3226879139' entity='mgr.y' 2026-03-08T22:57:43.316 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:01.698127+0000 mon.a (mon.0) 53 : audit [INF] from='mgr.14118 192.168.123.106:0/3226879139' entity='mgr.y' 2026-03-08T22:57:43.316 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cluster 2026-03-08T22:57:02.200552+0000 mon.a (mon.0) 54 : cluster [DBG] mgrmap 
e7: y(active, since 1.02571s) 2026-03-08T22:57:43.316 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cluster 2026-03-08T22:57:02.200552+0000 mon.a (mon.0) 54 : cluster [DBG] mgrmap e7: y(active, since 1.02571s) 2026-03-08T22:57:43.316 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:02.198667+0000 mgr.y (mgr.14118) 2 : audit [DBG] from='client.14122 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch 2026-03-08T22:57:43.316 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:02.198667+0000 mgr.y (mgr.14118) 2 : audit [DBG] from='client.14122 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch 2026-03-08T22:57:43.316 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:02.205010+0000 mgr.y (mgr.14118) 3 : audit [DBG] from='client.14122 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch 2026-03-08T22:57:43.316 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:02.205010+0000 mgr.y (mgr.14118) 3 : audit [DBG] from='client.14122 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch 2026-03-08T22:57:43.316 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:02.520279+0000 mgr.y (mgr.14118) 4 : audit [DBG] from='client.14130 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T22:57:43.316 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:02.520279+0000 mgr.y (mgr.14118) 4 : audit [DBG] from='client.14130 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T22:57:43.316 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 
bash[23232]: audit 2026-03-08T22:57:02.524015+0000 mon.a (mon.0) 55 : audit [INF] from='mgr.14118 192.168.123.106:0/3226879139' entity='mgr.y' 2026-03-08T22:57:43.316 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:02.524015+0000 mon.a (mon.0) 55 : audit [INF] from='mgr.14118 192.168.123.106:0/3226879139' entity='mgr.y' 2026-03-08T22:57:43.316 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:02.531001+0000 mon.a (mon.0) 56 : audit [DBG] from='mgr.14118 192.168.123.106:0/3226879139' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T22:57:43.316 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:02.531001+0000 mon.a (mon.0) 56 : audit [DBG] from='mgr.14118 192.168.123.106:0/3226879139' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T22:57:43.316 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cephadm 2026-03-08T22:57:02.711483+0000 mgr.y (mgr.14118) 5 : cephadm [INF] [08/Mar/2026:22:57:02] ENGINE Bus STARTING 2026-03-08T22:57:43.316 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cephadm 2026-03-08T22:57:02.711483+0000 mgr.y (mgr.14118) 5 : cephadm [INF] [08/Mar/2026:22:57:02] ENGINE Bus STARTING 2026-03-08T22:57:43.316 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cephadm 2026-03-08T22:57:02.823125+0000 mgr.y (mgr.14118) 6 : cephadm [INF] [08/Mar/2026:22:57:02] ENGINE Serving on https://192.168.123.106:7150 2026-03-08T22:57:43.316 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cephadm 2026-03-08T22:57:02.823125+0000 mgr.y (mgr.14118) 6 : cephadm [INF] [08/Mar/2026:22:57:02] ENGINE Serving on https://192.168.123.106:7150 2026-03-08T22:57:43.316 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cephadm 2026-03-08T22:57:02.824007+0000 
mgr.y (mgr.14118) 7 : cephadm [INF] [08/Mar/2026:22:57:02] ENGINE Client ('192.168.123.106', 37270) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-08T22:57:43.316 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cephadm 2026-03-08T22:57:02.824007+0000 mgr.y (mgr.14118) 7 : cephadm [INF] [08/Mar/2026:22:57:02] ENGINE Client ('192.168.123.106', 37270) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-08T22:57:43.316 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:02.849018+0000 mgr.y (mgr.14118) 8 : audit [DBG] from='client.14132 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "root", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T22:57:43.316 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:02.849018+0000 mgr.y (mgr.14118) 8 : audit [DBG] from='client.14132 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "root", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T22:57:43.316 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cephadm 2026-03-08T22:57:02.924678+0000 mgr.y (mgr.14118) 9 : cephadm [INF] [08/Mar/2026:22:57:02] ENGINE Serving on http://192.168.123.106:8765 2026-03-08T22:57:43.316 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cephadm 2026-03-08T22:57:02.924678+0000 mgr.y (mgr.14118) 9 : cephadm [INF] [08/Mar/2026:22:57:02] ENGINE Serving on http://192.168.123.106:8765 2026-03-08T22:57:43.316 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cephadm 2026-03-08T22:57:02.924736+0000 mgr.y (mgr.14118) 10 : cephadm [INF] [08/Mar/2026:22:57:02] ENGINE Bus STARTED 2026-03-08T22:57:43.316 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: 
cephadm 2026-03-08T22:57:02.924736+0000 mgr.y (mgr.14118) 10 : cephadm [INF] [08/Mar/2026:22:57:02] ENGINE Bus STARTED 2026-03-08T22:57:43.316 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:02.925439+0000 mon.a (mon.0) 57 : audit [DBG] from='mgr.14118 192.168.123.106:0/3226879139' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T22:57:43.316 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:02.925439+0000 mon.a (mon.0) 57 : audit [DBG] from='mgr.14118 192.168.123.106:0/3226879139' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T22:57:43.316 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:03.137200+0000 mon.a (mon.0) 58 : audit [INF] from='mgr.14118 192.168.123.106:0/3226879139' entity='mgr.y' 2026-03-08T22:57:43.316 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:03.137200+0000 mon.a (mon.0) 58 : audit [INF] from='mgr.14118 192.168.123.106:0/3226879139' entity='mgr.y' 2026-03-08T22:57:43.316 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:03.140189+0000 mon.a (mon.0) 59 : audit [INF] from='mgr.14118 192.168.123.106:0/3226879139' entity='mgr.y' 2026-03-08T22:57:43.316 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:03.140189+0000 mon.a (mon.0) 59 : audit [INF] from='mgr.14118 192.168.123.106:0/3226879139' entity='mgr.y' 2026-03-08T22:57:43.316 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:03.111949+0000 mgr.y (mgr.14118) 11 : audit [DBG] from='client.14134 -' entity='client.admin' cmd=[{"prefix": "cephadm generate-key", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T22:57:43.316 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 
2026-03-08T22:57:03.111949+0000 mgr.y (mgr.14118) 11 : audit [DBG] from='client.14134 -' entity='client.admin' cmd=[{"prefix": "cephadm generate-key", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T22:57:43.316 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cephadm 2026-03-08T22:57:03.112198+0000 mgr.y (mgr.14118) 12 : cephadm [INF] Generating ssh key... 2026-03-08T22:57:43.316 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cephadm 2026-03-08T22:57:03.112198+0000 mgr.y (mgr.14118) 12 : cephadm [INF] Generating ssh key... 2026-03-08T22:57:43.316 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:03.410389+0000 mgr.y (mgr.14118) 13 : audit [DBG] from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T22:57:43.316 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:03.410389+0000 mgr.y (mgr.14118) 13 : audit [DBG] from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T22:57:43.316 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:03.695735+0000 mgr.y (mgr.14118) 14 : audit [DBG] from='client.14138 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm06", "addr": "192.168.123.106", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T22:57:43.316 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:03.695735+0000 mgr.y (mgr.14118) 14 : audit [DBG] from='client.14138 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm06", "addr": "192.168.123.106", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T22:57:43.316 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cluster 2026-03-08T22:57:04.144123+0000 mon.a (mon.0) 60 : cluster [DBG] 
mgrmap e8: y(active, since 2s) 2026-03-08T22:57:43.316 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cluster 2026-03-08T22:57:04.144123+0000 mon.a (mon.0) 60 : cluster [DBG] mgrmap e8: y(active, since 2s) 2026-03-08T22:57:43.316 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cephadm 2026-03-08T22:57:04.311685+0000 mgr.y (mgr.14118) 15 : cephadm [INF] Deploying cephadm binary to vm06 2026-03-08T22:57:43.316 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cephadm 2026-03-08T22:57:04.311685+0000 mgr.y (mgr.14118) 15 : cephadm [INF] Deploying cephadm binary to vm06 2026-03-08T22:57:43.316 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:05.603234+0000 mon.a (mon.0) 61 : audit [INF] from='mgr.14118 192.168.123.106:0/3226879139' entity='mgr.y' 2026-03-08T22:57:43.316 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:05.603234+0000 mon.a (mon.0) 61 : audit [INF] from='mgr.14118 192.168.123.106:0/3226879139' entity='mgr.y' 2026-03-08T22:57:43.316 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cephadm 2026-03-08T22:57:05.603601+0000 mgr.y (mgr.14118) 16 : cephadm [INF] Added host vm06 2026-03-08T22:57:43.316 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cephadm 2026-03-08T22:57:05.603601+0000 mgr.y (mgr.14118) 16 : cephadm [INF] Added host vm06 2026-03-08T22:57:43.316 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:05.604213+0000 mon.a (mon.0) 62 : audit [DBG] from='mgr.14118 192.168.123.106:0/3226879139' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T22:57:43.316 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:05.604213+0000 mon.a (mon.0) 62 : audit [DBG] from='mgr.14118 192.168.123.106:0/3226879139' entity='mgr.y' 
cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T22:57:43.316 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:06.006170+0000 mon.a (mon.0) 63 : audit [INF] from='mgr.14118 192.168.123.106:0/3226879139' entity='mgr.y' 2026-03-08T22:57:43.316 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:06.006170+0000 mon.a (mon.0) 63 : audit [INF] from='mgr.14118 192.168.123.106:0/3226879139' entity='mgr.y' 2026-03-08T22:57:43.316 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:06.305201+0000 mon.a (mon.0) 64 : audit [INF] from='mgr.14118 192.168.123.106:0/3226879139' entity='mgr.y' 2026-03-08T22:57:43.316 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:06.305201+0000 mon.a (mon.0) 64 : audit [INF] from='mgr.14118 192.168.123.106:0/3226879139' entity='mgr.y' 2026-03-08T22:57:43.316 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:06.571044+0000 mon.a (mon.0) 65 : audit [INF] from='client.? 192.168.123.106:0/3322939435' entity='client.admin' 2026-03-08T22:57:43.316 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:06.571044+0000 mon.a (mon.0) 65 : audit [INF] from='client.? 
192.168.123.106:0/3322939435' entity='client.admin' 2026-03-08T22:57:43.317 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:06.002053+0000 mgr.y (mgr.14118) 17 : audit [DBG] from='client.14140 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T22:57:43.317 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:06.002053+0000 mgr.y (mgr.14118) 17 : audit [DBG] from='client.14140 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T22:57:43.317 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cephadm 2026-03-08T22:57:06.002965+0000 mgr.y (mgr.14118) 18 : cephadm [INF] Saving service mon spec with placement count:5 2026-03-08T22:57:43.317 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cephadm 2026-03-08T22:57:06.002965+0000 mgr.y (mgr.14118) 18 : cephadm [INF] Saving service mon spec with placement count:5 2026-03-08T22:57:43.317 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:06.300791+0000 mgr.y (mgr.14118) 19 : audit [DBG] from='client.14142 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T22:57:43.317 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:06.300791+0000 mgr.y (mgr.14118) 19 : audit [DBG] from='client.14142 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T22:57:43.317 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cephadm 2026-03-08T22:57:06.301754+0000 mgr.y (mgr.14118) 20 : cephadm [INF] Saving service mgr spec with 
placement count:2 2026-03-08T22:57:43.317 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cephadm 2026-03-08T22:57:06.301754+0000 mgr.y (mgr.14118) 20 : cephadm [INF] Saving service mgr spec with placement count:2 2026-03-08T22:57:43.317 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:06.854617+0000 mon.a (mon.0) 66 : audit [INF] from='client.? 192.168.123.106:0/3431215985' entity='client.admin' 2026-03-08T22:57:43.317 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:06.854617+0000 mon.a (mon.0) 66 : audit [INF] from='client.? 192.168.123.106:0/3431215985' entity='client.admin' 2026-03-08T22:57:43.317 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:07.199362+0000 mon.a (mon.0) 67 : audit [INF] from='mgr.14118 192.168.123.106:0/3226879139' entity='mgr.y' 2026-03-08T22:57:43.317 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:07.199362+0000 mon.a (mon.0) 67 : audit [INF] from='mgr.14118 192.168.123.106:0/3226879139' entity='mgr.y' 2026-03-08T22:57:43.317 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:07.236749+0000 mon.a (mon.0) 68 : audit [INF] from='client.? 192.168.123.106:0/1044126831' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch 2026-03-08T22:57:43.317 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:07.236749+0000 mon.a (mon.0) 68 : audit [INF] from='client.? 
192.168.123.106:0/1044126831' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch 2026-03-08T22:57:43.317 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:07.490052+0000 mon.a (mon.0) 69 : audit [INF] from='mgr.14118 192.168.123.106:0/3226879139' entity='mgr.y' 2026-03-08T22:57:43.317 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:07.490052+0000 mon.a (mon.0) 69 : audit [INF] from='mgr.14118 192.168.123.106:0/3226879139' entity='mgr.y' 2026-03-08T22:57:43.317 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:08.202109+0000 mon.a (mon.0) 70 : audit [INF] from='client.? 192.168.123.106:0/1044126831' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished 2026-03-08T22:57:43.317 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:08.202109+0000 mon.a (mon.0) 70 : audit [INF] from='client.? 192.168.123.106:0/1044126831' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished 2026-03-08T22:57:43.317 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cluster 2026-03-08T22:57:08.210244+0000 mon.a (mon.0) 71 : cluster [DBG] mgrmap e9: y(active, since 7s) 2026-03-08T22:57:43.317 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cluster 2026-03-08T22:57:08.210244+0000 mon.a (mon.0) 71 : cluster [DBG] mgrmap e9: y(active, since 7s) 2026-03-08T22:57:43.317 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:08.640780+0000 mon.a (mon.0) 72 : audit [DBG] from='client.? 
192.168.123.106:0/723172848' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-08T22:57:43.317 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:08.640780+0000 mon.a (mon.0) 72 : audit [DBG] from='client.? 192.168.123.106:0/723172848' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-08T22:57:43.317 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cluster 2026-03-08T22:57:11.437487+0000 mon.a (mon.0) 73 : cluster [INF] Active manager daemon y restarted 2026-03-08T22:57:43.317 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cluster 2026-03-08T22:57:11.437487+0000 mon.a (mon.0) 73 : cluster [INF] Active manager daemon y restarted 2026-03-08T22:57:43.317 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cluster 2026-03-08T22:57:11.437917+0000 mon.a (mon.0) 74 : cluster [INF] Activating manager daemon y 2026-03-08T22:57:43.317 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cluster 2026-03-08T22:57:11.437917+0000 mon.a (mon.0) 74 : cluster [INF] Activating manager daemon y 2026-03-08T22:57:43.317 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cluster 2026-03-08T22:57:11.443656+0000 mon.a (mon.0) 75 : cluster [DBG] osdmap e3: 0 total, 0 up, 0 in 2026-03-08T22:57:43.317 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cluster 2026-03-08T22:57:11.443656+0000 mon.a (mon.0) 75 : cluster [DBG] osdmap e3: 0 total, 0 up, 0 in 2026-03-08T22:57:43.317 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cluster 2026-03-08T22:57:11.443831+0000 mon.a (mon.0) 76 : cluster [DBG] mgrmap e10: y(active, starting, since 0.00602005s) 2026-03-08T22:57:43.317 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cluster 2026-03-08T22:57:11.443831+0000 mon.a (mon.0) 76 : cluster [DBG] mgrmap e10: y(active, starting, since 
0.00602005s) 2026-03-08T22:57:43.317 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:11.446814+0000 mon.a (mon.0) 77 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-08T22:57:43.317 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:11.446814+0000 mon.a (mon.0) 77 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-08T22:57:43.317 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:11.447648+0000 mon.a (mon.0) 78 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-08T22:57:43.317 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:11.447648+0000 mon.a (mon.0) 78 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-08T22:57:43.317 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:11.448668+0000 mon.a (mon.0) 79 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-08T22:57:43.317 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:11.448668+0000 mon.a (mon.0) 79 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-08T22:57:43.317 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:11.448981+0000 mon.a (mon.0) 80 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-08T22:57:43.317 
INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:11.448981+0000 mon.a (mon.0) 80 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-08T22:57:43.317 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:11.449300+0000 mon.a (mon.0) 81 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-08T22:57:43.317 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:11.449300+0000 mon.a (mon.0) 81 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-08T22:57:43.317 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cluster 2026-03-08T22:57:11.455585+0000 mon.a (mon.0) 82 : cluster [INF] Manager daemon y is now available 2026-03-08T22:57:43.317 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cluster 2026-03-08T22:57:11.455585+0000 mon.a (mon.0) 82 : cluster [INF] Manager daemon y is now available 2026-03-08T22:57:43.317 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:11.478002+0000 mon.a (mon.0) 83 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-08T22:57:43.317 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:11.478002+0000 mon.a (mon.0) 83 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-08T22:57:43.317 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:11.478508+0000 mon.a 
(mon.0) 84 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T22:57:43.317 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:11.478508+0000 mon.a (mon.0) 84 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T22:57:43.317 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:11.496476+0000 mon.a (mon.0) 85 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-08T22:57:43.317 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:11.496476+0000 mon.a (mon.0) 85 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-08T22:57:43.317 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cluster 2026-03-08T22:57:12.447282+0000 mon.a (mon.0) 86 : cluster [DBG] mgrmap e11: y(active, since 1.00948s) 2026-03-08T22:57:43.317 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cluster 2026-03-08T22:57:12.447282+0000 mon.a (mon.0) 86 : cluster [DBG] mgrmap e11: y(active, since 1.00948s) 2026-03-08T22:57:43.317 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cephadm 2026-03-08T22:57:12.344216+0000 mgr.y (mgr.14150) 1 : cephadm [INF] [08/Mar/2026:22:57:12] ENGINE Bus STARTING 2026-03-08T22:57:43.317 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cephadm 2026-03-08T22:57:12.344216+0000 mgr.y (mgr.14150) 1 : cephadm [INF] [08/Mar/2026:22:57:12] ENGINE Bus STARTING 2026-03-08T22:57:43.317 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 
08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:12.448025+0000 mgr.y (mgr.14150) 2 : audit [DBG] from='client.14154 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch 2026-03-08T22:57:43.317 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:12.448025+0000 mgr.y (mgr.14150) 2 : audit [DBG] from='client.14154 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch 2026-03-08T22:57:43.317 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:12.451760+0000 mgr.y (mgr.14150) 3 : audit [DBG] from='client.14154 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch 2026-03-08T22:57:43.317 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:12.451760+0000 mgr.y (mgr.14150) 3 : audit [DBG] from='client.14154 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch 2026-03-08T22:57:43.317 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cephadm 2026-03-08T22:57:12.452128+0000 mgr.y (mgr.14150) 4 : cephadm [INF] [08/Mar/2026:22:57:12] ENGINE Serving on https://192.168.123.106:7150 2026-03-08T22:57:43.317 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cephadm 2026-03-08T22:57:12.452128+0000 mgr.y (mgr.14150) 4 : cephadm [INF] [08/Mar/2026:22:57:12] ENGINE Serving on https://192.168.123.106:7150 2026-03-08T22:57:43.317 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cephadm 2026-03-08T22:57:12.457566+0000 mgr.y (mgr.14150) 5 : cephadm [INF] [08/Mar/2026:22:57:12] ENGINE Client ('192.168.123.106', 49382) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-08T22:57:43.317 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cephadm 2026-03-08T22:57:12.457566+0000 mgr.y (mgr.14150) 5 : cephadm 
[INF] [08/Mar/2026:22:57:12] ENGINE Client ('192.168.123.106', 49382) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-08T22:57:43.317 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cephadm 2026-03-08T22:57:12.553258+0000 mgr.y (mgr.14150) 6 : cephadm [INF] [08/Mar/2026:22:57:12] ENGINE Serving on http://192.168.123.106:8765 2026-03-08T22:57:43.317 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cephadm 2026-03-08T22:57:12.553258+0000 mgr.y (mgr.14150) 6 : cephadm [INF] [08/Mar/2026:22:57:12] ENGINE Serving on http://192.168.123.106:8765 2026-03-08T22:57:43.317 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cephadm 2026-03-08T22:57:12.553440+0000 mgr.y (mgr.14150) 7 : cephadm [INF] [08/Mar/2026:22:57:12] ENGINE Bus STARTED 2026-03-08T22:57:43.317 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cephadm 2026-03-08T22:57:12.553440+0000 mgr.y (mgr.14150) 7 : cephadm [INF] [08/Mar/2026:22:57:12] ENGINE Bus STARTED 2026-03-08T22:57:43.317 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:12.720940+0000 mgr.y (mgr.14150) 8 : audit [DBG] from='client.14162 -' entity='client.admin' cmd=[{"prefix": "dashboard create-self-signed-cert", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T22:57:43.318 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:12.720940+0000 mgr.y (mgr.14150) 8 : audit [DBG] from='client.14162 -' entity='client.admin' cmd=[{"prefix": "dashboard create-self-signed-cert", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T22:57:43.318 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:12.952198+0000 mon.a (mon.0) 87 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:43.318 
INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:12.952198+0000 mon.a (mon.0) 87 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:43.318 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:12.959793+0000 mon.a (mon.0) 88 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:43.318 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:12.959793+0000 mon.a (mon.0) 88 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:43.318 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:13.446761+0000 mon.a (mon.0) 89 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:43.318 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:13.446761+0000 mon.a (mon.0) 89 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:43.318 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:13.781267+0000 mon.a (mon.0) 90 : audit [DBG] from='client.? 192.168.123.106:0/3801189881' entity='client.admin' cmd=[{"prefix": "config get", "who": "mgr", "key": "mgr/dashboard/ssl_server_port"}]: dispatch 2026-03-08T22:57:43.318 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:13.781267+0000 mon.a (mon.0) 90 : audit [DBG] from='client.? 
192.168.123.106:0/3801189881' entity='client.admin' cmd=[{"prefix": "config get", "who": "mgr", "key": "mgr/dashboard/ssl_server_port"}]: dispatch 2026-03-08T22:57:43.318 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:13.291322+0000 mgr.y (mgr.14150) 9 : audit [DBG] from='client.14164 -' entity='client.admin' cmd=[{"prefix": "dashboard ac-user-create", "username": "admin", "rolename": "administrator", "force_password": true, "pwd_update_required": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T22:57:43.318 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:13.291322+0000 mgr.y (mgr.14150) 9 : audit [DBG] from='client.14164 -' entity='client.admin' cmd=[{"prefix": "dashboard ac-user-create", "username": "admin", "rolename": "administrator", "force_password": true, "pwd_update_required": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T22:57:43.318 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cluster 2026-03-08T22:57:13.964023+0000 mon.a (mon.0) 91 : cluster [DBG] mgrmap e12: y(active, since 2s) 2026-03-08T22:57:43.318 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cluster 2026-03-08T22:57:13.964023+0000 mon.a (mon.0) 91 : cluster [DBG] mgrmap e12: y(active, since 2s) 2026-03-08T22:57:43.318 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:14.100744+0000 mon.a (mon.0) 92 : audit [INF] from='client.? 192.168.123.106:0/1251462135' entity='client.admin' 2026-03-08T22:57:43.318 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:14.100744+0000 mon.a (mon.0) 92 : audit [INF] from='client.? 
192.168.123.106:0/1251462135' entity='client.admin' 2026-03-08T22:57:43.318 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:16.398549+0000 mon.a (mon.0) 93 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:43.318 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:16.398549+0000 mon.a (mon.0) 93 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:43.318 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:16.967602+0000 mon.a (mon.0) 94 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:43.318 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:16.967602+0000 mon.a (mon.0) 94 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:43.318 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cluster 2026-03-08T22:57:18.403904+0000 mon.a (mon.0) 95 : cluster [DBG] mgrmap e13: y(active, since 6s) 2026-03-08T22:57:43.318 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cluster 2026-03-08T22:57:18.403904+0000 mon.a (mon.0) 95 : cluster [DBG] mgrmap e13: y(active, since 6s) 2026-03-08T22:57:43.318 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:18.557071+0000 mon.a (mon.0) 96 : audit [INF] from='client.? 192.168.123.106:0/83386232' entity='client.admin' 2026-03-08T22:57:43.318 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:18.557071+0000 mon.a (mon.0) 96 : audit [INF] from='client.? 
192.168.123.106:0/83386232' entity='client.admin' 2026-03-08T22:57:43.318 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:22.723498+0000 mon.a (mon.0) 97 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:43.318 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:22.723498+0000 mon.a (mon.0) 97 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:43.318 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:22.730011+0000 mon.a (mon.0) 98 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:43.318 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:22.730011+0000 mon.a (mon.0) 98 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:43.318 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:22.730695+0000 mon.a (mon.0) 99 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm06", "name": "osd_memory_target"}]: dispatch 2026-03-08T22:57:43.318 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:22.730695+0000 mon.a (mon.0) 99 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm06", "name": "osd_memory_target"}]: dispatch 2026-03-08T22:57:43.318 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:22.737414+0000 mon.a (mon.0) 100 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:43.318 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:22.737414+0000 mon.a (mon.0) 100 : audit [INF] 
from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:43.318 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:22.743131+0000 mon.a (mon.0) 101 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T22:57:43.318 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:22.743131+0000 mon.a (mon.0) 101 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T22:57:43.318 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:22.752221+0000 mon.a (mon.0) 102 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:43.318 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:22.752221+0000 mon.a (mon.0) 102 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:43.318 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:23.431074+0000 mon.a (mon.0) 103 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:43.318 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:23.431074+0000 mon.a (mon.0) 103 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:43.318 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:23.431667+0000 mon.a (mon.0) 104 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T22:57:43.318 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:23.431667+0000 mon.a (mon.0) 104 : audit 
[DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T22:57:43.318 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:23.432717+0000 mon.a (mon.0) 105 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T22:57:43.318 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:23.432717+0000 mon.a (mon.0) 105 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T22:57:43.318 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:23.433199+0000 mon.a (mon.0) 106 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T22:57:43.318 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:23.433199+0000 mon.a (mon.0) 106 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T22:57:43.318 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:23.577385+0000 mon.a (mon.0) 107 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:43.318 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:23.577385+0000 mon.a (mon.0) 107 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:43.318 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:23.580012+0000 mon.a (mon.0) 108 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:43.318 
INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:23.580012+0000 mon.a (mon.0) 108 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:43.318 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:23.582354+0000 mon.a (mon.0) 109 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:43.318 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:23.582354+0000 mon.a (mon.0) 109 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:43.318 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:23.428179+0000 mgr.y (mgr.14150) 10 : audit [DBG] from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "*", "mode": "0755", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T22:57:43.318 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:23.428179+0000 mgr.y (mgr.14150) 10 : audit [DBG] from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "*", "mode": "0755", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T22:57:43.318 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cephadm 2026-03-08T22:57:23.433869+0000 mgr.y (mgr.14150) 11 : cephadm [INF] Updating vm06:/etc/ceph/ceph.conf 2026-03-08T22:57:43.318 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cephadm 2026-03-08T22:57:23.433869+0000 mgr.y (mgr.14150) 11 : cephadm [INF] Updating vm06:/etc/ceph/ceph.conf 2026-03-08T22:57:43.318 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cephadm 2026-03-08T22:57:23.471779+0000 mgr.y (mgr.14150) 12 : cephadm [INF] Updating 
vm06:/var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/config/ceph.conf 2026-03-08T22:57:43.318 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cephadm 2026-03-08T22:57:23.471779+0000 mgr.y (mgr.14150) 12 : cephadm [INF] Updating vm06:/var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/config/ceph.conf 2026-03-08T22:57:43.318 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cephadm 2026-03-08T22:57:23.517218+0000 mgr.y (mgr.14150) 13 : cephadm [INF] Updating vm06:/etc/ceph/ceph.client.admin.keyring 2026-03-08T22:57:43.318 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cephadm 2026-03-08T22:57:23.517218+0000 mgr.y (mgr.14150) 13 : cephadm [INF] Updating vm06:/etc/ceph/ceph.client.admin.keyring 2026-03-08T22:57:43.319 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cephadm 2026-03-08T22:57:23.546948+0000 mgr.y (mgr.14150) 14 : cephadm [INF] Updating vm06:/var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/config/ceph.client.admin.keyring 2026-03-08T22:57:43.319 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cephadm 2026-03-08T22:57:23.546948+0000 mgr.y (mgr.14150) 14 : cephadm [INF] Updating vm06:/var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/config/ceph.client.admin.keyring 2026-03-08T22:57:43.319 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:28.454114+0000 mgr.y (mgr.14150) 15 : audit [DBG] from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm11", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T22:57:43.319 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:28.454114+0000 mgr.y (mgr.14150) 15 : audit [DBG] from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm11", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T22:57:43.319 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 
08 22:57:43 vm11 bash[23232]: cephadm 2026-03-08T22:57:28.961861+0000 mgr.y (mgr.14150) 16 : cephadm [INF] Deploying cephadm binary to vm11 2026-03-08T22:57:43.319 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cephadm 2026-03-08T22:57:28.961861+0000 mgr.y (mgr.14150) 16 : cephadm [INF] Deploying cephadm binary to vm11 2026-03-08T22:57:43.319 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:30.136582+0000 mon.a (mon.0) 110 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:43.319 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:30.136582+0000 mon.a (mon.0) 110 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:43.319 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cephadm 2026-03-08T22:57:30.137000+0000 mgr.y (mgr.14150) 17 : cephadm [INF] Added host vm11 2026-03-08T22:57:43.319 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cephadm 2026-03-08T22:57:30.137000+0000 mgr.y (mgr.14150) 17 : cephadm [INF] Added host vm11 2026-03-08T22:57:43.319 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:30.137260+0000 mon.a (mon.0) 111 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T22:57:43.319 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:30.137260+0000 mon.a (mon.0) 111 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T22:57:43.319 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:30.422715+0000 mon.a (mon.0) 112 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 
2026-03-08T22:57:43.319 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:30.422715+0000 mon.a (mon.0) 112 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:43.319 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cluster 2026-03-08T22:57:31.449626+0000 mgr.y (mgr.14150) 18 : cluster [DBG] pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T22:57:43.319 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cluster 2026-03-08T22:57:31.449626+0000 mgr.y (mgr.14150) 18 : cluster [DBG] pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T22:57:43.319 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:31.697839+0000 mon.a (mon.0) 113 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:43.319 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:31.697839+0000 mon.a (mon.0) 113 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:43.319 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:32.235581+0000 mon.a (mon.0) 114 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:43.319 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:32.235581+0000 mon.a (mon.0) 114 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:43.319 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cluster 2026-03-08T22:57:33.449784+0000 mgr.y (mgr.14150) 19 : cluster [DBG] pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T22:57:43.319 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cluster 2026-03-08T22:57:33.449784+0000 mgr.y (mgr.14150) 19 : cluster 
[DBG] pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T22:57:43.319 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:34.926113+0000 mon.a (mon.0) 115 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:43.319 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:34.926113+0000 mon.a (mon.0) 115 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:43.319 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:34.928548+0000 mon.a (mon.0) 116 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:43.319 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:34.928548+0000 mon.a (mon.0) 116 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:43.319 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:34.931416+0000 mon.a (mon.0) 117 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:43.319 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:34.931416+0000 mon.a (mon.0) 117 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:43.319 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:34.933964+0000 mon.a (mon.0) 118 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:43.319 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:34.933964+0000 mon.a (mon.0) 118 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:43.319 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 
2026-03-08T22:57:34.934567+0000 mon.a (mon.0) 119 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm11", "name": "osd_memory_target"}]: dispatch 2026-03-08T22:57:43.319 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:34.934567+0000 mon.a (mon.0) 119 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm11", "name": "osd_memory_target"}]: dispatch 2026-03-08T22:57:43.319 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:34.935138+0000 mon.a (mon.0) 120 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T22:57:43.319 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:34.935138+0000 mon.a (mon.0) 120 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T22:57:43.319 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:34.935485+0000 mon.a (mon.0) 121 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T22:57:43.319 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:34.935485+0000 mon.a (mon.0) 121 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T22:57:43.319 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cephadm 2026-03-08T22:57:34.936009+0000 mgr.y (mgr.14150) 20 : cephadm [INF] Updating vm11:/etc/ceph/ceph.conf 2026-03-08T22:57:43.319 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 
bash[23232]: cephadm 2026-03-08T22:57:34.936009+0000 mgr.y (mgr.14150) 20 : cephadm [INF] Updating vm11:/etc/ceph/ceph.conf 2026-03-08T22:57:43.319 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cephadm 2026-03-08T22:57:34.971549+0000 mgr.y (mgr.14150) 21 : cephadm [INF] Updating vm11:/var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/config/ceph.conf 2026-03-08T22:57:43.319 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cephadm 2026-03-08T22:57:34.971549+0000 mgr.y (mgr.14150) 21 : cephadm [INF] Updating vm11:/var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/config/ceph.conf 2026-03-08T22:57:43.319 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cephadm 2026-03-08T22:57:35.000633+0000 mgr.y (mgr.14150) 22 : cephadm [INF] Updating vm11:/etc/ceph/ceph.client.admin.keyring 2026-03-08T22:57:43.319 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cephadm 2026-03-08T22:57:35.000633+0000 mgr.y (mgr.14150) 22 : cephadm [INF] Updating vm11:/etc/ceph/ceph.client.admin.keyring 2026-03-08T22:57:43.319 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cephadm 2026-03-08T22:57:35.031228+0000 mgr.y (mgr.14150) 23 : cephadm [INF] Updating vm11:/var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/config/ceph.client.admin.keyring 2026-03-08T22:57:43.319 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cephadm 2026-03-08T22:57:35.031228+0000 mgr.y (mgr.14150) 23 : cephadm [INF] Updating vm11:/var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/config/ceph.client.admin.keyring 2026-03-08T22:57:43.319 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:35.065610+0000 mon.a (mon.0) 122 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:43.319 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:35.065610+0000 mon.a 
(mon.0) 122 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:43.319 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:35.067622+0000 mon.a (mon.0) 123 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:43.319 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:35.067622+0000 mon.a (mon.0) 123 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:43.319 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:35.069492+0000 mon.a (mon.0) 124 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:43.319 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:35.069492+0000 mon.a (mon.0) 124 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:43.319 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:35.104144+0000 mgr.y (mgr.14150) 24 : audit [DBG] from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-08T22:57:43.319 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:35.104144+0000 mgr.y (mgr.14150) 24 : audit [DBG] from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-08T22:57:43.319 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cluster 2026-03-08T22:57:35.450040+0000 mgr.y (mgr.14150) 25 : cluster [DBG] pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T22:57:43.319 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cluster 2026-03-08T22:57:35.450040+0000 mgr.y (mgr.14150) 25 : cluster 
[DBG] pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T22:57:43.319 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cluster 2026-03-08T22:57:37.450302+0000 mgr.y (mgr.14150) 26 : cluster [DBG] pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T22:57:43.319 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cluster 2026-03-08T22:57:37.450302+0000 mgr.y (mgr.14150) 26 : cluster [DBG] pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T22:57:43.319 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:39.080755+0000 mon.a (mon.0) 125 : audit [INF] from='client.? 192.168.123.106:0/3008141344' entity='client.admin' cmd=[{"prefix": "osd crush tunables", "profile": "default"}]: dispatch 2026-03-08T22:57:43.319 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:39.080755+0000 mon.a (mon.0) 125 : audit [INF] from='client.? 192.168.123.106:0/3008141344' entity='client.admin' cmd=[{"prefix": "osd crush tunables", "profile": "default"}]: dispatch 2026-03-08T22:57:43.319 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cluster 2026-03-08T22:57:39.450449+0000 mgr.y (mgr.14150) 27 : cluster [DBG] pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T22:57:43.319 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cluster 2026-03-08T22:57:39.450449+0000 mgr.y (mgr.14150) 27 : cluster [DBG] pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T22:57:43.319 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:40.081841+0000 mon.a (mon.0) 126 : audit [INF] from='client.? 
192.168.123.106:0/3008141344' entity='client.admin' cmd='[{"prefix": "osd crush tunables", "profile": "default"}]': finished 2026-03-08T22:57:43.319 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:40.081841+0000 mon.a (mon.0) 126 : audit [INF] from='client.? 192.168.123.106:0/3008141344' entity='client.admin' cmd='[{"prefix": "osd crush tunables", "profile": "default"}]': finished 2026-03-08T22:57:43.320 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cluster 2026-03-08T22:57:40.083336+0000 mon.a (mon.0) 127 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-08T22:57:43.320 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cluster 2026-03-08T22:57:40.083336+0000 mon.a (mon.0) 127 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-08T22:57:43.320 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cluster 2026-03-08T22:57:41.450605+0000 mgr.y (mgr.14150) 28 : cluster [DBG] pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T22:57:43.320 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cluster 2026-03-08T22:57:41.450605+0000 mgr.y (mgr.14150) 28 : cluster [DBG] pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T22:57:43.320 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:41.481598+0000 mgr.y (mgr.14150) 29 : audit [DBG] from='client.14180 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "placement": "3;vm06:192.168.123.106=a;vm06:[v2:192.168.123.106:3301,v1:192.168.123.106:6790]=c;vm11:192.168.123.111=b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T22:57:43.320 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:41.481598+0000 mgr.y (mgr.14150) 29 : audit [DBG] from='client.14180 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "placement": 
"3;vm06:192.168.123.106=a;vm06:[v2:192.168.123.106:3301,v1:192.168.123.106:6790]=c;vm11:192.168.123.111=b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T22:57:43.320 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cephadm 2026-03-08T22:57:41.482650+0000 mgr.y (mgr.14150) 30 : cephadm [INF] Saving service mon spec with placement vm06:192.168.123.106=a;vm06:[v2:192.168.123.106:3301,v1:192.168.123.106:6790]=c;vm11:192.168.123.111=b;count:3 2026-03-08T22:57:43.320 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cephadm 2026-03-08T22:57:41.482650+0000 mgr.y (mgr.14150) 30 : cephadm [INF] Saving service mon spec with placement vm06:192.168.123.106=a;vm06:[v2:192.168.123.106:3301,v1:192.168.123.106:6790]=c;vm11:192.168.123.111=b;count:3 2026-03-08T22:57:43.320 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:41.485345+0000 mon.a (mon.0) 128 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:43.320 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:41.485345+0000 mon.a (mon.0) 128 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:43.320 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:41.485989+0000 mon.a (mon.0) 129 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T22:57:43.320 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:41.485989+0000 mon.a (mon.0) 129 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T22:57:43.320 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:41.487131+0000 mon.a (mon.0) 130 : audit [DBG] 
from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T22:57:43.320 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:41.487131+0000 mon.a (mon.0) 130 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T22:57:43.320 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:41.487617+0000 mon.a (mon.0) 131 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T22:57:43.320 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:41.487617+0000 mon.a (mon.0) 131 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T22:57:43.320 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:41.491646+0000 mon.a (mon.0) 132 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:43.320 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:41.491646+0000 mon.a (mon.0) 132 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:43.320 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:41.493113+0000 mon.a (mon.0) 133 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-08T22:57:43.320 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:41.493113+0000 mon.a (mon.0) 133 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": 
"mon."}]: dispatch 2026-03-08T22:57:43.320 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:41.493654+0000 mon.a (mon.0) 134 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T22:57:43.320 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: audit 2026-03-08T22:57:41.493654+0000 mon.a (mon.0) 134 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T22:57:43.320 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cephadm 2026-03-08T22:57:41.494259+0000 mgr.y (mgr.14150) 31 : cephadm [INF] Deploying daemon mon.b on vm11 2026-03-08T22:57:43.320 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: cephadm 2026-03-08T22:57:41.494259+0000 mgr.y (mgr.14150) 31 : cephadm [INF] Deploying daemon mon.b on vm11 2026-03-08T22:57:43.320 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:43 vm11 bash[23232]: debug 2026-03-08T22:57:43.223+0000 7fd94850f640 1 mon.b@-1(synchronizing).paxosservice(auth 1..3) refresh upgraded, format 0 -> 3 2026-03-08T22:57:43.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:43 vm06 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T22:57:43.781 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:57:43 vm06 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T22:57:44.269 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:43 vm06 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T22:57:44.270 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:57:43 vm06 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T22:57:44.271 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:44 vm06 bash[27746]: debug 2026-03-08T22:57:44.017+0000 7f8e8a0fe640 1 mon.c@-1(synchronizing).paxosservice(auth 1..3) refresh upgraded, format 0 -> 3 2026-03-08T22:57:44.279 INFO:tasks.cephadm:Waiting for 3 mons in monmap... 
2026-03-08T22:57:44.279 DEBUG:teuthology.orchestra.run.vm11:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e2eb96e6-1b41-11f1-83e5-75f1b5373d30 -- ceph mon dump -f json
2026-03-08T22:57:47.883 INFO:teuthology.orchestra.run.vm11.stderr:Inferring config /var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/mon.b/config
2026-03-08T22:57:48.744 INFO:teuthology.orchestra.run.vm11.stderr:dumped monmap epoch 2
2026-03-08T22:57:48.744 INFO:teuthology.orchestra.run.vm11.stdout:
2026-03-08T22:57:48.745 INFO:teuthology.orchestra.run.vm11.stdout:{"epoch":2,"fsid":"e2eb96e6-1b41-11f1-83e5-75f1b5373d30","modified":"2026-03-08T22:57:43.233824Z","created":"2026-03-08T22:56:48.853084Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:3300","nonce":0},{"type":"v1","addr":"192.168.123.106:6789","nonce":0}]},"addr":"192.168.123.106:6789/0","public_addr":"192.168.123.106:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"b","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.111:3300","nonce":0},{"type":"v1","addr":"192.168.123.111:6789","nonce":0}]},"addr":"192.168.123.111:6789/0","public_addr":"192.168.123.111:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0,1]}
2026-03-08T22:57:48.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:48 vm11 bash[23232]: cephadm 2026-03-08T22:57:42.963559+0000 mgr.y (mgr.14150) 32 : cephadm [INF] Deploying daemon mon.c on vm06
2026-03-08T22:57:48.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:48 vm11 bash[23232]: audit 2026-03-08T22:57:43.237083+0000 mon.a (mon.0) 142 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-08T22:57:48.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:48 vm11 bash[23232]: audit 2026-03-08T22:57:43.237595+0000 mon.a (mon.0) 143 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-08T22:57:48.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:48 vm11 bash[23232]: cluster 2026-03-08T22:57:43.238277+0000 mon.a (mon.0) 144 : cluster [INF] mon.a calling monitor election
2026-03-08T22:57:48.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:48 vm11 bash[23232]: cluster 2026-03-08T22:57:43.450802+0000 mgr.y (mgr.14150) 33 : cluster [DBG] pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T22:57:48.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:48 vm11 bash[23232]: audit 2026-03-08T22:57:44.026507+0000 mon.a (mon.0) 145 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-08T22:57:48.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:48 vm11 bash[23232]: audit 2026-03-08T22:57:44.232722+0000 mon.a (mon.0) 146 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-08T22:57:48.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:48 vm11 bash[23232]: audit 2026-03-08T22:57:45.026252+0000 mon.a (mon.0) 147 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-08T22:57:48.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:48 vm11 bash[23232]: audit 2026-03-08T22:57:45.232665+0000 mon.a (mon.0) 148 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-08T22:57:48.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:48 vm11 bash[23232]: cluster 2026-03-08T22:57:45.234376+0000 mon.b (mon.1) 1 : cluster [INF] mon.b calling monitor election
2026-03-08T22:57:48.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:48 vm11 bash[23232]: cluster 2026-03-08T22:57:45.450988+0000 mgr.y (mgr.14150) 34 : cluster [DBG] pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T22:57:48.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:48 vm11 bash[23232]: audit 2026-03-08T22:57:46.026151+0000 mon.a (mon.0) 149 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-08T22:57:48.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:48 vm11 bash[23232]: audit 2026-03-08T22:57:46.232677+0000 mon.a (mon.0) 150 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-08T22:57:48.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:48 vm11 bash[23232]: audit 2026-03-08T22:57:47.026458+0000 mon.a (mon.0) 151 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-08T22:57:48.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:48 vm11 bash[23232]: audit 2026-03-08T22:57:47.233050+0000 mon.a (mon.0) 152 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-08T22:57:48.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:48 vm11 bash[23232]: cluster 2026-03-08T22:57:47.451151+0000 mgr.y (mgr.14150) 35 : cluster [DBG] pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T22:57:48.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:48 vm11 bash[23232]: audit 2026-03-08T22:57:48.026294+0000 mon.a (mon.0) 153 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-08T22:57:48.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:48 vm11 bash[23232]: audit 2026-03-08T22:57:48.233184+0000 mon.a (mon.0) 154 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-08T22:57:48.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:48 vm11 bash[23232]: cluster 2026-03-08T22:57:48.246101+0000 mon.a (mon.0) 155 : cluster [INF] mon.a is new leader, mons a,b in quorum (ranks 0,1)
2026-03-08T22:57:48.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:48 vm11 bash[23232]: cluster 2026-03-08T22:57:48.248895+0000 mon.a (mon.0) 156 : cluster [DBG] monmap epoch 2
2026-03-08T22:57:48.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:48 vm11 bash[23232]: cluster 2026-03-08T22:57:48.248914+0000 mon.a (mon.0) 157 : cluster [DBG] fsid e2eb96e6-1b41-11f1-83e5-75f1b5373d30
2026-03-08T22:57:48.754 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:48 vm11 bash[23232]: cluster 2026-03-08T22:57:48.248923+0000 mon.a (mon.0) 158 : cluster [DBG] last_changed 2026-03-08T22:57:43.233824+0000
2026-03-08T22:57:48.755 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:48 vm11 bash[23232]: cluster 2026-03-08T22:57:48.248931+0000 mon.a (mon.0) 159 : cluster [DBG] created 2026-03-08T22:56:48.853084+0000
2026-03-08T22:57:48.755 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:48 vm11 bash[23232]: cluster 2026-03-08T22:57:48.248946+0000 mon.a (mon.0) 160 : cluster [DBG] min_mon_release 19 (squid)
2026-03-08T22:57:48.755 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:48 vm11 bash[23232]: cluster 2026-03-08T22:57:48.248954+0000 mon.a (mon.0) 161 : cluster [DBG] election_strategy: 1
2026-03-08T22:57:48.755 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:48 vm11 bash[23232]: cluster 2026-03-08T22:57:48.248963+0000 mon.a (mon.0) 162 : cluster [DBG] 0: [v2:192.168.123.106:3300/0,v1:192.168.123.106:6789/0] mon.a
2026-03-08T22:57:48.755 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:48 vm11 bash[23232]: cluster 2026-03-08T22:57:48.248972+0000 mon.a (mon.0) 163 : cluster [DBG] 1: [v2:192.168.123.111:3300/0,v1:192.168.123.111:6789/0] mon.b
2026-03-08T22:57:48.755 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:48 vm11 bash[23232]: cluster 2026-03-08T22:57:48.249273+0000 mon.a (mon.0) 164 : cluster [DBG] fsmap
2026-03-08T22:57:48.755 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:48 vm11 bash[23232]: cluster 2026-03-08T22:57:48.249294+0000 mon.a (mon.0) 165 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in
2026-03-08T22:57:48.755 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:48 vm11 bash[23232]: cluster 2026-03-08T22:57:48.249405+0000 mon.a (mon.0) 166 : cluster [DBG] mgrmap e13: y(active, since 36s)
2026-03-08T22:57:48.755 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:48 vm11 bash[23232]: cluster 2026-03-08T22:57:48.249478+0000 mon.a (mon.0) 167 : cluster [INF] overall HEALTH_OK
2026-03-08T22:57:48.755 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:48 vm11 bash[23232]: audit 2026-03-08T22:57:48.252303+0000 mon.a (mon.0) 168 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:57:48.755 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:48 vm11 bash[23232]: audit 2026-03-08T22:57:48.255176+0000 mon.a (mon.0) 169 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:57:48.755 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:48 vm11 bash[23232]: audit 2026-03-08T22:57:48.257746+0000 mon.a (mon.0) 170 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:57:48.755 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:48 vm11 bash[23232]: audit 2026-03-08T22:57:48.260228+0000 mon.a (mon.0) 171 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:57:48.755 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:48 vm11 bash[23232]: audit 2026-03-08T22:57:48.268935+0000 mon.a (mon.0) 172 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T22:57:48.781 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:48 vm06 bash[20625]: cephadm 2026-03-08T22:57:42.963559+0000 mgr.y (mgr.14150) 32 : cephadm [INF] Deploying daemon mon.c on vm06
2026-03-08T22:57:48.781 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:48 vm06 bash[20625]: audit 2026-03-08T22:57:43.237083+0000 mon.a (mon.0) 142 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-08T22:57:48.781 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:48 vm06 bash[20625]: audit 2026-03-08T22:57:43.237595+0000 mon.a (mon.0) 143 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-08T22:57:48.781 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:48 vm06 bash[20625]: cluster 2026-03-08T22:57:43.238277+0000 mon.a (mon.0) 144 : cluster [INF] mon.a calling monitor election
2026-03-08T22:57:48.781 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:48 vm06 bash[20625]: cluster 2026-03-08T22:57:43.450802+0000 mgr.y (mgr.14150) 33 : cluster [DBG] pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T22:57:48.781 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:48 vm06 bash[20625]: audit 2026-03-08T22:57:44.026507+0000 mon.a (mon.0) 145 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-08T22:57:48.781 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:48 vm06 bash[20625]: audit 2026-03-08T22:57:44.232722+0000 mon.a (mon.0) 146 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-08T22:57:48.781 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:48 vm06 bash[20625]: audit 2026-03-08T22:57:45.026252+0000 mon.a (mon.0) 147 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-08T22:57:48.781 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:48 vm06 bash[20625]: audit 2026-03-08T22:57:45.232665+0000 mon.a (mon.0) 148 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-08T22:57:48.781 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:48 vm06 bash[20625]: cluster 2026-03-08T22:57:45.234376+0000 mon.b (mon.1) 1 : cluster [INF] mon.b calling monitor election
2026-03-08T22:57:48.781 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:48 vm06 bash[20625]: cluster 2026-03-08T22:57:45.450988+0000 mgr.y (mgr.14150) 34 : cluster [DBG] pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T22:57:48.781 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:48 vm06 bash[20625]: audit 2026-03-08T22:57:46.026151+0000 mon.a (mon.0) 149 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-08T22:57:48.781 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:48 vm06 bash[20625]: audit 2026-03-08T22:57:46.232677+0000 mon.a (mon.0) 150 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-08T22:57:48.781 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:48 vm06 bash[20625]: audit 2026-03-08T22:57:47.026458+0000 mon.a (mon.0) 151 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-08T22:57:48.781 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:48 vm06 bash[20625]: audit 2026-03-08T22:57:47.233050+0000 mon.a (mon.0) 152 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-08T22:57:48.781 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:48 vm06 bash[20625]: cluster 2026-03-08T22:57:47.451151+0000 mgr.y (mgr.14150) 35 : cluster [DBG] pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T22:57:48.781 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:48 vm06 bash[20625]: audit 2026-03-08T22:57:48.026294+0000 mon.a (mon.0) 153 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-08T22:57:48.781 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:48 vm06 bash[20625]: audit 2026-03-08T22:57:48.233184+0000 mon.a (mon.0) 154 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-08T22:57:48.781 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:48 vm06 bash[20625]: cluster 2026-03-08T22:57:48.246101+0000 mon.a (mon.0) 155 : cluster [INF] mon.a is new leader, mons a,b in quorum (ranks 0,1)
2026-03-08T22:57:48.781 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:48 vm06 bash[20625]: cluster 2026-03-08T22:57:48.248895+0000 mon.a (mon.0) 156 : cluster [DBG] monmap epoch 2
2026-03-08T22:57:48.781 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:48 vm06 bash[20625]: cluster 2026-03-08T22:57:48.248914+0000 mon.a (mon.0) 157 : cluster [DBG] fsid e2eb96e6-1b41-11f1-83e5-75f1b5373d30
2026-03-08T22:57:48.781 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:48 vm06 bash[20625]: cluster 2026-03-08T22:57:48.248923+0000 mon.a (mon.0) 158 : cluster [DBG] last_changed 2026-03-08T22:57:43.233824+0000
2026-03-08T22:57:48.781 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:48 vm06 bash[20625]: cluster 2026-03-08T22:57:48.248931+0000 mon.a (mon.0) 159 : cluster [DBG] created 2026-03-08T22:56:48.853084+0000
2026-03-08T22:57:48.781 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:48 vm06 bash[20625]: cluster 2026-03-08T22:57:48.248946+0000 mon.a (mon.0) 160 : cluster [DBG] min_mon_release 19 (squid)
2026-03-08T22:57:48.781 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:48 vm06 bash[20625]: cluster 2026-03-08T22:57:48.248954+0000 mon.a (mon.0) 161 : cluster [DBG] election_strategy: 1
2026-03-08T22:57:48.781 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:48 vm06 bash[20625]: cluster 2026-03-08T22:57:48.248963+0000 mon.a (mon.0) 162 : cluster [DBG] 0: [v2:192.168.123.106:3300/0,v1:192.168.123.106:6789/0] mon.a
2026-03-08T22:57:48.781 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:48 vm06 bash[20625]: cluster 2026-03-08T22:57:48.248972+0000 mon.a (mon.0) 163 : cluster [DBG] 1: [v2:192.168.123.111:3300/0,v1:192.168.123.111:6789/0] mon.b
2026-03-08T22:57:48.781 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:48 vm06 bash[20625]: cluster 2026-03-08T22:57:48.249273+0000 mon.a (mon.0) 164 : cluster [DBG] fsmap
2026-03-08T22:57:48.781 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:48 vm06 bash[20625]: cluster 2026-03-08T22:57:48.249294+0000 mon.a (mon.0) 165 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in
2026-03-08T22:57:48.782 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:48 vm06 bash[20625]: cluster 2026-03-08T22:57:48.249405+0000 mon.a (mon.0) 166 : cluster [DBG] mgrmap e13: y(active, since 36s)
2026-03-08T22:57:48.782 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:48 vm06 bash[20625]: cluster 2026-03-08T22:57:48.249478+0000 mon.a (mon.0) 167 : cluster [INF] overall HEALTH_OK
2026-03-08T22:57:48.782 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:48 vm06 bash[20625]: audit 2026-03-08T22:57:48.252303+0000 mon.a (mon.0) 168 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:57:48.782 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:48 vm06 bash[20625]: audit 2026-03-08T22:57:48.255176+0000 mon.a (mon.0) 169 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:57:48.782 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:48 vm06 bash[20625]: audit 2026-03-08T22:57:48.257746+0000 mon.a (mon.0) 170 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:57:48.782 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:48 vm06 bash[20625]: audit 2026-03-08T22:57:48.260228+0000 mon.a (mon.0) 171 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:57:48.782 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:48 vm06 bash[20625]: audit 2026-03-08T22:57:48.268935+0000 mon.a (mon.0) 172 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T22:57:49.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:49 vm06 bash[20625]: audit 2026-03-08T22:57:48.744471+0000 mon.a (mon.0) 173 : audit [DBG] from='client.? 192.168.123.111:0/3329122919' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
2026-03-08T22:57:49.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:49 vm06 bash[20625]: audit 2026-03-08T22:57:49.026717+0000 mon.a (mon.0) 174 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-08T22:57:49.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:49 vm06 bash[20625]: audit 2026-03-08T22:57:49.233286+0000 mon.a (mon.0) 175 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-08T22:57:49.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:49 vm11 bash[23232]: audit 2026-03-08T22:57:48.744471+0000 mon.a (mon.0) 173 : audit [DBG] from='client.? 192.168.123.111:0/3329122919' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
2026-03-08T22:57:49.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:49 vm11 bash[23232]: audit 2026-03-08T22:57:49.026717+0000 mon.a (mon.0) 174 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-08T22:57:49.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:49 vm11 bash[23232]: audit 2026-03-08T22:57:49.233286+0000 mon.a (mon.0) 175 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-08T22:57:49.809 INFO:tasks.cephadm:Waiting for 3 mons in monmap...
2026-03-08T22:57:49.809 DEBUG:teuthology.orchestra.run.vm11:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e2eb96e6-1b41-11f1-83e5-75f1b5373d30 -- ceph mon dump -f json 2026-03-08T22:57:50.530 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:57:50 vm06 bash[20883]: debug 2026-03-08T22:57:50.229+0000 7fda6cfff640 -1 mgr.server handle_report got status from non-daemon mon.b 2026-03-08T22:57:54.023 INFO:teuthology.orchestra.run.vm11.stderr:Inferring config /var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/mon.b/config 2026-03-08T22:57:55.473 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:55 vm06 bash[20625]: cluster 2026-03-08T22:57:49.451381+0000 mgr.y (mgr.14150) 36 : cluster [DBG] pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T22:57:55.474 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:55 vm06 bash[20625]: cluster 2026-03-08T22:57:49.451381+0000 mgr.y (mgr.14150) 36 : cluster [DBG] pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T22:57:55.474 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:55 vm06 bash[20625]: cluster 2026-03-08T22:57:50.036821+0000 mon.a (mon.0) 177 : cluster [INF] mon.a calling monitor election 2026-03-08T22:57:55.474 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:55 vm06 bash[20625]: cluster 2026-03-08T22:57:50.036821+0000 mon.a (mon.0) 177 : cluster [INF] mon.a calling monitor election 2026-03-08T22:57:55.474 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:55 vm06 bash[20625]: audit 2026-03-08T22:57:50.037976+0000 mon.a (mon.0) 178 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-08T22:57:55.474 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:55 vm06 bash[20625]: audit 2026-03-08T22:57:50.037976+0000 mon.a (mon.0) 178 : audit [DBG] from='mgr.14150 
192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-08T22:57:55.474 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:55 vm06 bash[20625]: audit 2026-03-08T22:57:50.038092+0000 mon.a (mon.0) 179 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-08T22:57:55.474 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:55 vm06 bash[20625]: audit 2026-03-08T22:57:50.038092+0000 mon.a (mon.0) 179 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-08T22:57:55.474 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:55 vm06 bash[20625]: audit 2026-03-08T22:57:50.038258+0000 mon.a (mon.0) 180 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-08T22:57:55.474 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:55 vm06 bash[20625]: audit 2026-03-08T22:57:50.038258+0000 mon.a (mon.0) 180 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-08T22:57:55.474 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:55 vm06 bash[20625]: cluster 2026-03-08T22:57:50.040153+0000 mon.b (mon.1) 2 : cluster [INF] mon.b calling monitor election 2026-03-08T22:57:55.474 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:55 vm06 bash[20625]: cluster 2026-03-08T22:57:50.040153+0000 mon.b (mon.1) 2 : cluster [INF] mon.b calling monitor election 2026-03-08T22:57:55.474 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:55 vm06 bash[20625]: audit 2026-03-08T22:57:51.026919+0000 mon.a (mon.0) 181 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-08T22:57:55.474 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:55 vm06 
bash[20625]: audit 2026-03-08T22:57:51.026919+0000 mon.a (mon.0) 181 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-08T22:57:55.474 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:55 vm06 bash[20625]: cluster 2026-03-08T22:57:51.451608+0000 mgr.y (mgr.14150) 37 : cluster [DBG] pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T22:57:55.474 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:55 vm06 bash[20625]: cluster 2026-03-08T22:57:51.451608+0000 mgr.y (mgr.14150) 37 : cluster [DBG] pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T22:57:55.474 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:55 vm06 bash[20625]: audit 2026-03-08T22:57:52.027310+0000 mon.a (mon.0) 182 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-08T22:57:55.474 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:55 vm06 bash[20625]: audit 2026-03-08T22:57:52.027310+0000 mon.a (mon.0) 182 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-08T22:57:55.474 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:55 vm06 bash[20625]: cluster 2026-03-08T22:57:52.029178+0000 mon.c (mon.2) 1 : cluster [INF] mon.c calling monitor election 2026-03-08T22:57:55.474 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:55 vm06 bash[20625]: cluster 2026-03-08T22:57:52.029178+0000 mon.c (mon.2) 1 : cluster [INF] mon.c calling monitor election 2026-03-08T22:57:55.474 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:55 vm06 bash[20625]: audit 2026-03-08T22:57:53.027021+0000 mon.a (mon.0) 183 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-08T22:57:55.474 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:55 vm06 
bash[20625]: audit 2026-03-08T22:57:53.027021+0000 mon.a (mon.0) 183 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-08T22:57:55.474 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:55 vm06 bash[20625]: cluster 2026-03-08T22:57:53.451815+0000 mgr.y (mgr.14150) 38 : cluster [DBG] pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T22:57:55.474 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:55 vm06 bash[20625]: cluster 2026-03-08T22:57:53.451815+0000 mgr.y (mgr.14150) 38 : cluster [DBG] pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T22:57:55.474 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:55 vm06 bash[20625]: audit 2026-03-08T22:57:54.027192+0000 mon.a (mon.0) 184 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-08T22:57:55.474 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:55 vm06 bash[20625]: audit 2026-03-08T22:57:54.027192+0000 mon.a (mon.0) 184 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-08T22:57:55.474 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:55 vm06 bash[20625]: audit 2026-03-08T22:57:55.027411+0000 mon.a (mon.0) 185 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-08T22:57:55.474 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:55 vm06 bash[20625]: audit 2026-03-08T22:57:55.027411+0000 mon.a (mon.0) 185 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-08T22:57:55.474 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:55 vm06 bash[20625]: cluster 2026-03-08T22:57:55.041001+0000 mon.a (mon.0) 186 : cluster [INF] mon.a is new leader, mons a,b,c in quorum 
(ranks 0,1,2) 2026-03-08T22:57:55.474 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:55 vm06 bash[20625]: cluster 2026-03-08T22:57:55.041001+0000 mon.a (mon.0) 186 : cluster [INF] mon.a is new leader, mons a,b,c in quorum (ranks 0,1,2) 2026-03-08T22:57:55.474 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:55 vm06 bash[20625]: cluster 2026-03-08T22:57:55.044675+0000 mon.a (mon.0) 187 : cluster [DBG] monmap epoch 3 2026-03-08T22:57:55.475 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:55 vm06 bash[20625]: cluster 2026-03-08T22:57:55.044675+0000 mon.a (mon.0) 187 : cluster [DBG] monmap epoch 3 2026-03-08T22:57:55.475 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:55 vm06 bash[20625]: cluster 2026-03-08T22:57:55.044690+0000 mon.a (mon.0) 188 : cluster [DBG] fsid e2eb96e6-1b41-11f1-83e5-75f1b5373d30 2026-03-08T22:57:55.475 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:55 vm06 bash[20625]: cluster 2026-03-08T22:57:55.044690+0000 mon.a (mon.0) 188 : cluster [DBG] fsid e2eb96e6-1b41-11f1-83e5-75f1b5373d30 2026-03-08T22:57:55.475 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:55 vm06 bash[20625]: cluster 2026-03-08T22:57:55.044699+0000 mon.a (mon.0) 189 : cluster [DBG] last_changed 2026-03-08T22:57:50.028159+0000 2026-03-08T22:57:55.475 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:55 vm06 bash[20625]: cluster 2026-03-08T22:57:55.044699+0000 mon.a (mon.0) 189 : cluster [DBG] last_changed 2026-03-08T22:57:50.028159+0000 2026-03-08T22:57:55.475 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:55 vm06 bash[20625]: cluster 2026-03-08T22:57:55.044707+0000 mon.a (mon.0) 190 : cluster [DBG] created 2026-03-08T22:56:48.853084+0000 2026-03-08T22:57:55.475 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:55 vm06 bash[20625]: cluster 2026-03-08T22:57:55.044707+0000 mon.a (mon.0) 190 : cluster [DBG] created 2026-03-08T22:56:48.853084+0000 2026-03-08T22:57:55.475 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:55 vm06 bash[20625]: cluster 
2026-03-08T22:57:55.044716+0000 mon.a (mon.0) 191 : cluster [DBG] min_mon_release 19 (squid) 2026-03-08T22:57:55.475 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:55 vm06 bash[20625]: cluster 2026-03-08T22:57:55.044716+0000 mon.a (mon.0) 191 : cluster [DBG] min_mon_release 19 (squid) 2026-03-08T22:57:55.475 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:55 vm06 bash[20625]: cluster 2026-03-08T22:57:55.044724+0000 mon.a (mon.0) 192 : cluster [DBG] election_strategy: 1 2026-03-08T22:57:55.475 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:55 vm06 bash[20625]: cluster 2026-03-08T22:57:55.044724+0000 mon.a (mon.0) 192 : cluster [DBG] election_strategy: 1 2026-03-08T22:57:55.475 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:55 vm06 bash[20625]: cluster 2026-03-08T22:57:55.044733+0000 mon.a (mon.0) 193 : cluster [DBG] 0: [v2:192.168.123.106:3300/0,v1:192.168.123.106:6789/0] mon.a 2026-03-08T22:57:55.475 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:55 vm06 bash[20625]: cluster 2026-03-08T22:57:55.044733+0000 mon.a (mon.0) 193 : cluster [DBG] 0: [v2:192.168.123.106:3300/0,v1:192.168.123.106:6789/0] mon.a 2026-03-08T22:57:55.475 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:55 vm06 bash[20625]: cluster 2026-03-08T22:57:55.044742+0000 mon.a (mon.0) 194 : cluster [DBG] 1: [v2:192.168.123.111:3300/0,v1:192.168.123.111:6789/0] mon.b 2026-03-08T22:57:55.475 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:55 vm06 bash[20625]: cluster 2026-03-08T22:57:55.044742+0000 mon.a (mon.0) 194 : cluster [DBG] 1: [v2:192.168.123.111:3300/0,v1:192.168.123.111:6789/0] mon.b 2026-03-08T22:57:55.475 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:55 vm06 bash[20625]: cluster 2026-03-08T22:57:55.044751+0000 mon.a (mon.0) 195 : cluster [DBG] 2: [v2:192.168.123.106:3301/0,v1:192.168.123.106:6790/0] mon.c 2026-03-08T22:57:55.475 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:55 vm06 bash[20625]: cluster 2026-03-08T22:57:55.044751+0000 mon.a (mon.0) 195 
: cluster [DBG] 2: [v2:192.168.123.106:3301/0,v1:192.168.123.106:6790/0] mon.c 2026-03-08T22:57:55.475 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:55 vm06 bash[20625]: cluster 2026-03-08T22:57:55.046112+0000 mon.a (mon.0) 196 : cluster [DBG] fsmap 2026-03-08T22:57:55.475 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:55 vm06 bash[20625]: cluster 2026-03-08T22:57:55.046112+0000 mon.a (mon.0) 196 : cluster [DBG] fsmap 2026-03-08T22:57:55.475 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:55 vm06 bash[20625]: cluster 2026-03-08T22:57:55.046135+0000 mon.a (mon.0) 197 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-08T22:57:55.475 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:55 vm06 bash[20625]: cluster 2026-03-08T22:57:55.046135+0000 mon.a (mon.0) 197 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-08T22:57:55.475 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:55 vm06 bash[20625]: cluster 2026-03-08T22:57:55.046309+0000 mon.a (mon.0) 198 : cluster [DBG] mgrmap e13: y(active, since 43s) 2026-03-08T22:57:55.475 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:55 vm06 bash[20625]: cluster 2026-03-08T22:57:55.046309+0000 mon.a (mon.0) 198 : cluster [DBG] mgrmap e13: y(active, since 43s) 2026-03-08T22:57:55.475 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:55 vm06 bash[20625]: cluster 2026-03-08T22:57:55.049970+0000 mon.a (mon.0) 199 : cluster [INF] overall HEALTH_OK 2026-03-08T22:57:55.475 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:55 vm06 bash[20625]: cluster 2026-03-08T22:57:55.049970+0000 mon.a (mon.0) 199 : cluster [INF] overall HEALTH_OK 2026-03-08T22:57:55.475 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:55 vm06 bash[20625]: audit 2026-03-08T22:57:55.054817+0000 mon.a (mon.0) 200 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:55.475 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:55 vm06 bash[20625]: audit 2026-03-08T22:57:55.054817+0000 mon.a (mon.0) 
200 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:55.475 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:55 vm06 bash[20625]: audit 2026-03-08T22:57:55.059397+0000 mon.a (mon.0) 201 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:55.475 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:55 vm06 bash[20625]: audit 2026-03-08T22:57:55.059397+0000 mon.a (mon.0) 201 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:55.475 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:55 vm06 bash[20625]: audit 2026-03-08T22:57:55.065427+0000 mon.a (mon.0) 202 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:55.475 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:55 vm06 bash[20625]: audit 2026-03-08T22:57:55.065427+0000 mon.a (mon.0) 202 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:55.475 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:55 vm06 bash[20625]: audit 2026-03-08T22:57:55.068376+0000 mon.a (mon.0) 203 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:55.475 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:55 vm06 bash[20625]: audit 2026-03-08T22:57:55.068376+0000 mon.a (mon.0) 203 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:55.475 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:55 vm06 bash[20625]: audit 2026-03-08T22:57:55.071573+0000 mon.a (mon.0) 204 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:55.475 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:55 vm06 bash[20625]: audit 2026-03-08T22:57:55.071573+0000 mon.a (mon.0) 204 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:55.475 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:55 vm06 bash[20625]: 
audit 2026-03-08T22:57:55.072426+0000 mon.a (mon.0) 205 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T22:57:55.475 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:55 vm06 bash[20625]: audit 2026-03-08T22:57:55.072426+0000 mon.a (mon.0) 205 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T22:57:55.475 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:55 vm06 bash[20625]: audit 2026-03-08T22:57:55.073024+0000 mon.a (mon.0) 206 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T22:57:55.475 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:55 vm06 bash[20625]: audit 2026-03-08T22:57:55.073024+0000 mon.a (mon.0) 206 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T22:57:55.475 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:55 vm06 bash[27746]: cephadm 2026-03-08T22:57:42.963559+0000 mgr.y (mgr.14150) 32 : cephadm [INF] Deploying daemon mon.c on vm06 2026-03-08T22:57:55.475 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:55 vm06 bash[27746]: cephadm 2026-03-08T22:57:42.963559+0000 mgr.y (mgr.14150) 32 : cephadm [INF] Deploying daemon mon.c on vm06 2026-03-08T22:57:55.475 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:55 vm06 bash[27746]: audit 2026-03-08T22:57:43.237083+0000 mon.a (mon.0) 142 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-08T22:57:55.475 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:55 vm06 bash[27746]: audit 2026-03-08T22:57:43.237083+0000 mon.a (mon.0) 142 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": 
"mon metadata", "id": "a"}]: dispatch 2026-03-08T22:57:55.475 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:55 vm06 bash[27746]: audit 2026-03-08T22:57:43.237595+0000 mon.a (mon.0) 143 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-08T22:57:55.475 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:55 vm06 bash[27746]: audit 2026-03-08T22:57:43.237595+0000 mon.a (mon.0) 143 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-08T22:57:55.475 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:55 vm06 bash[27746]: cluster 2026-03-08T22:57:43.238277+0000 mon.a (mon.0) 144 : cluster [INF] mon.a calling monitor election 2026-03-08T22:57:55.475 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:55 vm06 bash[27746]: cluster 2026-03-08T22:57:43.238277+0000 mon.a (mon.0) 144 : cluster [INF] mon.a calling monitor election 2026-03-08T22:57:55.475 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:55 vm06 bash[27746]: cluster 2026-03-08T22:57:43.450802+0000 mgr.y (mgr.14150) 33 : cluster [DBG] pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T22:57:55.475 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:55 vm06 bash[27746]: cluster 2026-03-08T22:57:43.450802+0000 mgr.y (mgr.14150) 33 : cluster [DBG] pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T22:57:55.475 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:55 vm06 bash[27746]: audit 2026-03-08T22:57:44.026507+0000 mon.a (mon.0) 145 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-08T22:57:55.476 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:55 vm06 bash[27746]: audit 2026-03-08T22:57:44.026507+0000 mon.a (mon.0) 145 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon 
metadata", "id": "c"}]: dispatch 2026-03-08T22:57:55.476 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:55 vm06 bash[27746]: audit 2026-03-08T22:57:44.232722+0000 mon.a (mon.0) 146 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-08T22:57:55.476 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:55 vm06 bash[27746]: audit 2026-03-08T22:57:44.232722+0000 mon.a (mon.0) 146 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-08T22:57:55.476 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:55 vm06 bash[27746]: audit 2026-03-08T22:57:45.026252+0000 mon.a (mon.0) 147 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-08T22:57:55.476 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:55 vm06 bash[27746]: audit 2026-03-08T22:57:45.026252+0000 mon.a (mon.0) 147 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-08T22:57:55.476 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:55 vm06 bash[27746]: audit 2026-03-08T22:57:45.232665+0000 mon.a (mon.0) 148 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-08T22:57:55.476 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:55 vm06 bash[27746]: audit 2026-03-08T22:57:45.232665+0000 mon.a (mon.0) 148 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-08T22:57:55.476 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:55 vm06 bash[27746]: cluster 2026-03-08T22:57:45.234376+0000 mon.b (mon.1) 1 : cluster [INF] mon.b calling monitor election 2026-03-08T22:57:55.476 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 
22:57:55 vm06 bash[27746]: cluster 2026-03-08T22:57:45.234376+0000 mon.b (mon.1) 1 : cluster [INF] mon.b calling monitor election 2026-03-08T22:57:55.476 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:55 vm06 bash[27746]: cluster 2026-03-08T22:57:45.450988+0000 mgr.y (mgr.14150) 34 : cluster [DBG] pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T22:57:55.476 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:55 vm06 bash[27746]: cluster 2026-03-08T22:57:45.450988+0000 mgr.y (mgr.14150) 34 : cluster [DBG] pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T22:57:55.476 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:55 vm06 bash[27746]: audit 2026-03-08T22:57:46.026151+0000 mon.a (mon.0) 149 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-08T22:57:55.476 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:55 vm06 bash[27746]: audit 2026-03-08T22:57:46.026151+0000 mon.a (mon.0) 149 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-08T22:57:55.476 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:55 vm06 bash[27746]: audit 2026-03-08T22:57:46.232677+0000 mon.a (mon.0) 150 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-08T22:57:55.476 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:55 vm06 bash[27746]: audit 2026-03-08T22:57:46.232677+0000 mon.a (mon.0) 150 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-08T22:57:55.476 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:55 vm06 bash[27746]: audit 2026-03-08T22:57:47.026458+0000 mon.a (mon.0) 151 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 
2026-03-08T22:57:55.476 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:55 vm06 bash[27746]: audit 2026-03-08T22:57:47.026458+0000 mon.a (mon.0) 151 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-08T22:57:55.476 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:55 vm06 bash[27746]: audit 2026-03-08T22:57:47.233050+0000 mon.a (mon.0) 152 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-08T22:57:55.476 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:55 vm06 bash[27746]: audit 2026-03-08T22:57:47.233050+0000 mon.a (mon.0) 152 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-08T22:57:55.476 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:55 vm06 bash[27746]: cluster 2026-03-08T22:57:47.451151+0000 mgr.y (mgr.14150) 35 : cluster [DBG] pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T22:57:55.476 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:55 vm06 bash[27746]: cluster 2026-03-08T22:57:47.451151+0000 mgr.y (mgr.14150) 35 : cluster [DBG] pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T22:57:55.476 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:55 vm06 bash[27746]: audit 2026-03-08T22:57:48.026294+0000 mon.a (mon.0) 153 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-08T22:57:55.476 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:55 vm06 bash[27746]: audit 2026-03-08T22:57:48.026294+0000 mon.a (mon.0) 153 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-08T22:57:55.476 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:55 vm06 bash[27746]: audit 
2026-03-08T22:57:48.233184+0000 mon.a (mon.0) 154 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-08T22:57:55.476 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:55 vm06 bash[27746]: cluster 2026-03-08T22:57:48.246101+0000 mon.a (mon.0) 155 : cluster [INF] mon.a is new leader, mons a,b in quorum (ranks 0,1)
2026-03-08T22:57:55.476 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:55 vm06 bash[27746]: cluster 2026-03-08T22:57:48.248895+0000 mon.a (mon.0) 156 : cluster [DBG] monmap epoch 2
2026-03-08T22:57:55.476 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:55 vm06 bash[27746]: cluster 2026-03-08T22:57:48.248914+0000 mon.a (mon.0) 157 : cluster [DBG] fsid e2eb96e6-1b41-11f1-83e5-75f1b5373d30
2026-03-08T22:57:55.476 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:55 vm06 bash[27746]: cluster 2026-03-08T22:57:48.248923+0000 mon.a (mon.0) 158 : cluster [DBG] last_changed 2026-03-08T22:57:43.233824+0000
2026-03-08T22:57:55.476 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:55 vm06 bash[27746]: cluster 2026-03-08T22:57:48.248931+0000 mon.a (mon.0) 159 : cluster [DBG] created 2026-03-08T22:56:48.853084+0000
2026-03-08T22:57:55.476 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:55 vm06 bash[27746]: cluster 2026-03-08T22:57:48.248946+0000 mon.a (mon.0) 160 : cluster [DBG] min_mon_release 19 (squid)
2026-03-08T22:57:55.476 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:55 vm06 bash[27746]: cluster 2026-03-08T22:57:48.248954+0000 mon.a (mon.0) 161 : cluster [DBG] election_strategy: 1
2026-03-08T22:57:55.476 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:55 vm06 bash[27746]: cluster 2026-03-08T22:57:48.248963+0000 mon.a (mon.0) 162 : cluster [DBG] 0: [v2:192.168.123.106:3300/0,v1:192.168.123.106:6789/0] mon.a
2026-03-08T22:57:55.476 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:55 vm06 bash[27746]: cluster 2026-03-08T22:57:48.248972+0000 mon.a (mon.0) 163 : cluster [DBG] 1: [v2:192.168.123.111:3300/0,v1:192.168.123.111:6789/0] mon.b
2026-03-08T22:57:55.476 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:55 vm06 bash[27746]: cluster 2026-03-08T22:57:48.249273+0000 mon.a (mon.0) 164 : cluster [DBG] fsmap
2026-03-08T22:57:55.476 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:55 vm06 bash[27746]: cluster 2026-03-08T22:57:48.249294+0000 mon.a (mon.0) 165 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in
2026-03-08T22:57:55.476 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:55 vm06 bash[27746]: cluster 2026-03-08T22:57:48.249405+0000 mon.a (mon.0) 166 : cluster [DBG] mgrmap e13: y(active, since 36s)
2026-03-08T22:57:55.476 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:55 vm06 bash[27746]: cluster 2026-03-08T22:57:48.249478+0000 mon.a (mon.0) 167 : cluster [INF] overall HEALTH_OK
2026-03-08T22:57:55.476 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:55 vm06 bash[27746]: audit 2026-03-08T22:57:48.252303+0000 mon.a (mon.0) 168 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:57:55.476 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:55 vm06 bash[27746]: audit 2026-03-08T22:57:48.255176+0000 mon.a (mon.0) 169 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:57:55.476 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:55 vm06 bash[27746]: audit 2026-03-08T22:57:48.257746+0000 mon.a (mon.0) 170 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:57:55.476 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:55 vm06 bash[27746]: audit 2026-03-08T22:57:48.260228+0000 mon.a (mon.0) 171 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:57:55.476 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:55 vm06 bash[27746]: audit 2026-03-08T22:57:48.268935+0000 mon.a (mon.0) 172 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T22:57:55.477 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:55 vm06 bash[27746]: audit 2026-03-08T22:57:48.744471+0000 mon.a (mon.0) 173 : audit [DBG] from='client.? 192.168.123.111:0/3329122919' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
2026-03-08T22:57:55.477 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:55 vm06 bash[27746]: audit 2026-03-08T22:57:49.026717+0000 mon.a (mon.0) 174 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-08T22:57:55.477 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:55 vm06 bash[27746]: audit 2026-03-08T22:57:49.233286+0000 mon.a (mon.0) 175 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-08T22:57:55.477 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:55 vm06 bash[27746]: cluster 2026-03-08T22:57:49.451381+0000 mgr.y (mgr.14150) 36 : cluster [DBG] pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T22:57:55.477 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:55 vm06 bash[27746]: cluster 2026-03-08T22:57:50.036821+0000 mon.a (mon.0) 177 : cluster [INF] mon.a calling monitor election
2026-03-08T22:57:55.477 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:55 vm06 bash[27746]: audit 2026-03-08T22:57:50.037976+0000 mon.a (mon.0) 178 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-08T22:57:55.477 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:55 vm06 bash[27746]: audit 2026-03-08T22:57:50.038092+0000 mon.a (mon.0) 179 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-08T22:57:55.477 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:55 vm06 bash[27746]: audit 2026-03-08T22:57:50.038258+0000 mon.a (mon.0) 180 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-08T22:57:55.477 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:55 vm06 bash[27746]: cluster 2026-03-08T22:57:50.040153+0000 mon.b (mon.1) 2 : cluster [INF] mon.b calling monitor election
2026-03-08T22:57:55.477 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:55 vm06 bash[27746]: audit 2026-03-08T22:57:51.026919+0000 mon.a (mon.0) 181 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-08T22:57:55.477 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:55 vm06 bash[27746]: cluster 2026-03-08T22:57:51.451608+0000 mgr.y (mgr.14150) 37 : cluster [DBG] pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T22:57:55.477 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:55 vm06 bash[27746]: audit 2026-03-08T22:57:52.027310+0000 mon.a (mon.0) 182 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-08T22:57:55.477 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:55 vm06 bash[27746]: cluster 2026-03-08T22:57:52.029178+0000 mon.c (mon.2) 1 : cluster [INF] mon.c calling monitor election
2026-03-08T22:57:55.477 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:55 vm06 bash[27746]: audit 2026-03-08T22:57:53.027021+0000 mon.a (mon.0) 183 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-08T22:57:55.477 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:55 vm06 bash[27746]: cluster 2026-03-08T22:57:53.451815+0000 mgr.y (mgr.14150) 38 : cluster [DBG] pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T22:57:55.477 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:55 vm06 bash[27746]: audit 2026-03-08T22:57:54.027192+0000 mon.a (mon.0) 184 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-08T22:57:55.477 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:55 vm06 bash[27746]: audit 2026-03-08T22:57:55.027411+0000 mon.a (mon.0) 185 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-08T22:57:55.477 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:55 vm06 bash[27746]: cluster 2026-03-08T22:57:55.041001+0000 mon.a (mon.0) 186 : cluster [INF] mon.a is new leader, mons a,b,c in quorum (ranks 0,1,2)
2026-03-08T22:57:55.477 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:55 vm06 bash[27746]: cluster 2026-03-08T22:57:55.044675+0000 mon.a (mon.0) 187 : cluster [DBG] monmap epoch 3
2026-03-08T22:57:55.477 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:55 vm06 bash[27746]: cluster 2026-03-08T22:57:55.044690+0000 mon.a (mon.0) 188 : cluster [DBG] fsid e2eb96e6-1b41-11f1-83e5-75f1b5373d30
2026-03-08T22:57:55.477 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:55 vm06 bash[27746]: cluster 2026-03-08T22:57:55.044699+0000 mon.a (mon.0) 189 : cluster [DBG] last_changed 2026-03-08T22:57:50.028159+0000
2026-03-08T22:57:55.477 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:55 vm06 bash[27746]: cluster 2026-03-08T22:57:55.044707+0000 mon.a (mon.0) 190 : cluster [DBG] created 2026-03-08T22:56:48.853084+0000
2026-03-08T22:57:55.477 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:55 vm06 bash[27746]: cluster 2026-03-08T22:57:55.044716+0000 mon.a (mon.0) 191 : cluster [DBG] min_mon_release 19 (squid)
2026-03-08T22:57:55.477 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:55 vm06 bash[27746]: cluster 2026-03-08T22:57:55.044724+0000 mon.a (mon.0) 192 : cluster [DBG] election_strategy: 1
2026-03-08T22:57:55.477 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:55 vm06 bash[27746]: cluster 2026-03-08T22:57:55.044733+0000 mon.a (mon.0) 193 : cluster [DBG] 0: [v2:192.168.123.106:3300/0,v1:192.168.123.106:6789/0] mon.a
2026-03-08T22:57:55.477 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:55 vm06 bash[27746]: cluster 2026-03-08T22:57:55.044742+0000 mon.a (mon.0) 194 : cluster [DBG] 1: [v2:192.168.123.111:3300/0,v1:192.168.123.111:6789/0] mon.b
2026-03-08T22:57:55.477 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:55 vm06 bash[27746]: cluster 2026-03-08T22:57:55.044751+0000 mon.a (mon.0) 195 : cluster [DBG] 2: [v2:192.168.123.106:3301/0,v1:192.168.123.106:6790/0] mon.c
2026-03-08T22:57:55.477 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:55 vm06 bash[27746]: cluster 2026-03-08T22:57:55.046112+0000 mon.a (mon.0) 196 : cluster [DBG] fsmap
2026-03-08T22:57:55.477 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:55 vm06 bash[27746]: cluster 2026-03-08T22:57:55.046135+0000 mon.a (mon.0) 197 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in
2026-03-08T22:57:55.478 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:55 vm06 bash[27746]: cluster 2026-03-08T22:57:55.046309+0000 mon.a (mon.0) 198 : cluster [DBG] mgrmap e13: y(active, since 43s)
2026-03-08T22:57:55.478 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:55 vm06 bash[27746]: cluster 2026-03-08T22:57:55.049970+0000 mon.a (mon.0) 199 : cluster [INF] overall HEALTH_OK
2026-03-08T22:57:55.478 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:55 vm06 bash[27746]: audit 2026-03-08T22:57:55.054817+0000 mon.a (mon.0) 200 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:57:55.478 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:55 vm06 bash[27746]: audit 2026-03-08T22:57:55.059397+0000 mon.a (mon.0) 201 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:57:55.478 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:55 vm06 bash[27746]: audit 2026-03-08T22:57:55.065427+0000 mon.a (mon.0) 202 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:57:55.478 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:55 vm06 bash[27746]: audit 2026-03-08T22:57:55.068376+0000 mon.a (mon.0) 203 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:57:55.478 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:55 vm06 bash[27746]: audit 2026-03-08T22:57:55.071573+0000 mon.a (mon.0) 204 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:57:55.478 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:55 vm06 bash[27746]: audit 2026-03-08T22:57:55.072426+0000 mon.a (mon.0) 205 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T22:57:55.478 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:55 vm06 bash[27746]: audit 2026-03-08T22:57:55.073024+0000 mon.a (mon.0) 206 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T22:57:55.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:55 vm11 bash[23232]: cluster 2026-03-08T22:57:49.451381+0000 mgr.y (mgr.14150) 36 : cluster [DBG] pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T22:57:55.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:55 vm11 bash[23232]: cluster 2026-03-08T22:57:50.036821+0000 mon.a (mon.0) 177 : cluster [INF] mon.a calling monitor election
2026-03-08T22:57:55.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:55 vm11 bash[23232]: audit 2026-03-08T22:57:50.037976+0000 mon.a (mon.0) 178 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-08T22:57:55.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:55 vm11 bash[23232]: audit 2026-03-08T22:57:50.038092+0000 mon.a (mon.0) 179 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-08T22:57:55.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:55 vm11 bash[23232]: audit 2026-03-08T22:57:50.038258+0000 mon.a (mon.0) 180 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-08T22:57:55.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:55 vm11 bash[23232]: cluster 2026-03-08T22:57:50.040153+0000 mon.b (mon.1) 2 : cluster [INF] mon.b calling monitor election
2026-03-08T22:57:55.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:55 vm11 bash[23232]: audit 2026-03-08T22:57:51.026919+0000 mon.a (mon.0) 181 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-08T22:57:55.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:55 vm11 bash[23232]: cluster 2026-03-08T22:57:51.451608+0000 mgr.y (mgr.14150) 37 : cluster [DBG] pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T22:57:55.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:55 vm11 bash[23232]: audit 2026-03-08T22:57:52.027310+0000 mon.a (mon.0) 182 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-08T22:57:55.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:55 vm11 bash[23232]: cluster 2026-03-08T22:57:52.029178+0000 mon.c (mon.2) 1 : cluster [INF] mon.c calling monitor election
2026-03-08T22:57:55.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:55 vm11 bash[23232]: audit 2026-03-08T22:57:53.027021+0000 mon.a (mon.0) 183 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-08T22:57:55.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:55 vm11 bash[23232]: cluster 2026-03-08T22:57:53.451815+0000 mgr.y (mgr.14150) 38 : cluster [DBG] pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T22:57:55.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:55 vm11 bash[23232]: audit 2026-03-08T22:57:54.027192+0000 mon.a (mon.0) 184 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-08T22:57:55.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:55 vm11 bash[23232]: audit 2026-03-08T22:57:55.027411+0000 mon.a (mon.0) 185 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-08T22:57:55.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:55 vm11 bash[23232]: cluster 2026-03-08T22:57:55.041001+0000 mon.a (mon.0) 186 : cluster [INF] mon.a is new leader, mons a,b,c in quorum (ranks 0,1,2)
2026-03-08T22:57:55.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:55 vm11 bash[23232]: cluster 2026-03-08T22:57:55.044675+0000 mon.a (mon.0) 187 : cluster [DBG] monmap epoch 3
2026-03-08T22:57:55.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:55 vm11 bash[23232]: cluster 2026-03-08T22:57:55.044690+0000 mon.a (mon.0) 188 : cluster [DBG] fsid e2eb96e6-1b41-11f1-83e5-75f1b5373d30
2026-03-08T22:57:55.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:55 vm11 bash[23232]: cluster 2026-03-08T22:57:55.044699+0000 mon.a (mon.0) 189 : cluster [DBG] last_changed 2026-03-08T22:57:50.028159+0000
2026-03-08T22:57:55.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:55 vm11 bash[23232]: cluster 2026-03-08T22:57:55.044707+0000 mon.a (mon.0) 190 : cluster [DBG] created 2026-03-08T22:56:48.853084+0000
2026-03-08T22:57:55.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:55 vm11 bash[23232]: cluster 2026-03-08T22:57:55.044716+0000 mon.a (mon.0) 191 : cluster [DBG] min_mon_release 19 (squid)
2026-03-08T22:57:55.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:55 vm11 bash[23232]: cluster 2026-03-08T22:57:55.044724+0000 mon.a (mon.0) 192 : cluster [DBG] election_strategy: 1
2026-03-08T22:57:55.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:55 vm11 bash[23232]: cluster 2026-03-08T22:57:55.044733+0000 mon.a (mon.0) 193 : cluster [DBG] 0: [v2:192.168.123.106:3300/0,v1:192.168.123.106:6789/0] mon.a
2026-03-08T22:57:55.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:55 vm11 bash[23232]: cluster 2026-03-08T22:57:55.044742+0000 mon.a (mon.0) 194 : cluster [DBG] 1: [v2:192.168.123.111:3300/0,v1:192.168.123.111:6789/0] mon.b
2026-03-08T22:57:55.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:55 vm11 bash[23232]: cluster 2026-03-08T22:57:55.044751+0000 mon.a (mon.0) 195 : cluster [DBG] 2: [v2:192.168.123.106:3301/0,v1:192.168.123.106:6790/0] mon.c
2026-03-08T22:57:55.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:55 vm11 bash[23232]: cluster 2026-03-08T22:57:55.046112+0000 mon.a (mon.0) 196 : cluster [DBG] fsmap
2026-03-08T22:57:55.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:55 vm11 bash[23232]: cluster 2026-03-08T22:57:55.046135+0000 mon.a (mon.0) 197 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in
2026-03-08T22:57:55.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:55 vm11 bash[23232]: cluster 2026-03-08T22:57:55.046309+0000 mon.a (mon.0) 198 : cluster [DBG] mgrmap e13: y(active, since 43s)
2026-03-08T22:57:55.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:55 vm11 bash[23232]: cluster 2026-03-08T22:57:55.049970+0000 mon.a (mon.0) 199 : cluster [INF] overall HEALTH_OK
2026-03-08T22:57:55.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:55 vm11 bash[23232]: audit 2026-03-08T22:57:55.054817+0000 mon.a (mon.0) 200 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:57:55.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:55 vm11 bash[23232]: audit 2026-03-08T22:57:55.059397+0000 mon.a (mon.0) 201 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:57:55.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:55 vm11 bash[23232]: audit 2026-03-08T22:57:55.065427+0000 mon.a (mon.0) 202 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:57:55.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:55 vm11 bash[23232]: audit 2026-03-08T22:57:55.068376+0000 mon.a (mon.0) 203 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:57:55.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:55 vm11 bash[23232]: audit 2026-03-08T22:57:55.071573+0000 mon.a (mon.0) 204 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:57:55.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:55 vm11 bash[23232]: audit 2026-03-08T22:57:55.072426+0000 mon.a (mon.0) 205 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: 
dispatch 2026-03-08T22:57:55.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:55 vm11 bash[23232]: audit 2026-03-08T22:57:55.072426+0000 mon.a (mon.0) 205 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T22:57:55.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:55 vm11 bash[23232]: audit 2026-03-08T22:57:55.073024+0000 mon.a (mon.0) 206 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T22:57:55.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:55 vm11 bash[23232]: audit 2026-03-08T22:57:55.073024+0000 mon.a (mon.0) 206 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T22:57:55.752 INFO:teuthology.orchestra.run.vm11.stdout: 2026-03-08T22:57:55.752 INFO:teuthology.orchestra.run.vm11.stdout:{"epoch":3,"fsid":"e2eb96e6-1b41-11f1-83e5-75f1b5373d30","modified":"2026-03-08T22:57:50.028159Z","created":"2026-03-08T22:56:48.853084Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:3300","nonce":0},{"type":"v1","addr":"192.168.123.106:6789","nonce":0}]},"addr":"192.168.123.106:6789/0","public_addr":"192.168.123.106:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"b","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.111:3300","nonce":0},{"type":"v1","addr":"192.168.123.111:6789","nonce":0}]},"addr":"192.168.123.111:6789/0","public_addr":"192.168.123.111:6789/0","priority":0,"weig
ht":0,"crush_location":"{}"},{"rank":2,"name":"c","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:3301","nonce":0},{"type":"v1","addr":"192.168.123.106:6790","nonce":0}]},"addr":"192.168.123.106:6790/0","public_addr":"192.168.123.106:6790/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0,1,2]} 2026-03-08T22:57:55.753 INFO:teuthology.orchestra.run.vm11.stderr:dumped monmap epoch 3 2026-03-08T22:57:55.805 INFO:tasks.cephadm:Generating final ceph.conf file... 2026-03-08T22:57:55.806 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e2eb96e6-1b41-11f1-83e5-75f1b5373d30 -- ceph config generate-minimal-conf 2026-03-08T22:57:56.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:56 vm06 bash[20625]: cephadm 2026-03-08T22:57:55.073739+0000 mgr.y (mgr.14150) 39 : cephadm [INF] Updating vm06:/etc/ceph/ceph.conf 2026-03-08T22:57:56.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:56 vm06 bash[20625]: cephadm 2026-03-08T22:57:55.073739+0000 mgr.y (mgr.14150) 39 : cephadm [INF] Updating vm06:/etc/ceph/ceph.conf 2026-03-08T22:57:56.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:56 vm06 bash[20625]: cephadm 2026-03-08T22:57:55.073809+0000 mgr.y (mgr.14150) 40 : cephadm [INF] Updating vm11:/etc/ceph/ceph.conf 2026-03-08T22:57:56.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:56 vm06 bash[20625]: cephadm 2026-03-08T22:57:55.073809+0000 mgr.y (mgr.14150) 40 : cephadm [INF] Updating vm11:/etc/ceph/ceph.conf 2026-03-08T22:57:56.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:56 vm06 bash[20625]: cephadm 2026-03-08T22:57:55.121509+0000 mgr.y (mgr.14150) 41 : cephadm [INF] Updating vm06:/var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/config/ceph.conf 2026-03-08T22:57:56.281 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:56 vm06 bash[20625]: 
cephadm 2026-03-08T22:57:55.121509+0000 mgr.y (mgr.14150) 41 : cephadm [INF] Updating vm06:/var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/config/ceph.conf 2026-03-08T22:57:56.281 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:56 vm06 bash[20625]: cephadm 2026-03-08T22:57:55.131041+0000 mgr.y (mgr.14150) 42 : cephadm [INF] Updating vm11:/var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/config/ceph.conf 2026-03-08T22:57:56.281 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:56 vm06 bash[20625]: cephadm 2026-03-08T22:57:55.131041+0000 mgr.y (mgr.14150) 42 : cephadm [INF] Updating vm11:/var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/config/ceph.conf 2026-03-08T22:57:56.281 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:56 vm06 bash[20625]: audit 2026-03-08T22:57:55.173467+0000 mon.a (mon.0) 207 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:56.281 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:56 vm06 bash[20625]: audit 2026-03-08T22:57:55.173467+0000 mon.a (mon.0) 207 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:56.281 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:56 vm06 bash[20625]: audit 2026-03-08T22:57:55.178661+0000 mon.a (mon.0) 208 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:56.281 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:56 vm06 bash[20625]: audit 2026-03-08T22:57:55.178661+0000 mon.a (mon.0) 208 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:56.281 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:56 vm06 bash[20625]: audit 2026-03-08T22:57:55.184332+0000 mon.a (mon.0) 209 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:56.281 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:56 vm06 bash[20625]: audit 2026-03-08T22:57:55.184332+0000 mon.a (mon.0) 209 : audit [INF] from='mgr.14150 
192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:56.281 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:56 vm06 bash[20625]: audit 2026-03-08T22:57:55.188652+0000 mon.a (mon.0) 210 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:56.281 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:56 vm06 bash[20625]: audit 2026-03-08T22:57:55.188652+0000 mon.a (mon.0) 210 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:56.281 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:56 vm06 bash[20625]: audit 2026-03-08T22:57:55.193364+0000 mon.a (mon.0) 211 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:56.281 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:56 vm06 bash[20625]: audit 2026-03-08T22:57:55.193364+0000 mon.a (mon.0) 211 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:56.281 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:56 vm06 bash[20625]: audit 2026-03-08T22:57:55.208195+0000 mon.a (mon.0) 212 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:56.281 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:56 vm06 bash[20625]: audit 2026-03-08T22:57:55.208195+0000 mon.a (mon.0) 212 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:56.281 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:56 vm06 bash[20625]: audit 2026-03-08T22:57:55.211692+0000 mon.a (mon.0) 213 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:56.281 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:56 vm06 bash[20625]: audit 2026-03-08T22:57:55.211692+0000 mon.a (mon.0) 213 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:56.281 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:56 vm06 bash[20625]: audit 
2026-03-08T22:57:55.215354+0000 mon.a (mon.0) 214 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:56.281 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:56 vm06 bash[20625]: audit 2026-03-08T22:57:55.215354+0000 mon.a (mon.0) 214 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:56.281 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:56 vm06 bash[20625]: audit 2026-03-08T22:57:55.218837+0000 mon.a (mon.0) 215 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:56.281 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:56 vm06 bash[20625]: audit 2026-03-08T22:57:55.218837+0000 mon.a (mon.0) 215 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:56.281 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:56 vm06 bash[20625]: cephadm 2026-03-08T22:57:55.219161+0000 mgr.y (mgr.14150) 43 : cephadm [INF] Reconfiguring mon.c (monmap changed)... 2026-03-08T22:57:56.281 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:56 vm06 bash[20625]: cephadm 2026-03-08T22:57:55.219161+0000 mgr.y (mgr.14150) 43 : cephadm [INF] Reconfiguring mon.c (monmap changed)... 
2026-03-08T22:57:56.281 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:56 vm06 bash[20625]: audit 2026-03-08T22:57:55.219891+0000 mon.a (mon.0) 216 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-08T22:57:56.281 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:56 vm06 bash[20625]: audit 2026-03-08T22:57:55.219891+0000 mon.a (mon.0) 216 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-08T22:57:56.281 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:56 vm06 bash[20625]: audit 2026-03-08T22:57:55.220477+0000 mon.a (mon.0) 217 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-08T22:57:56.281 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:56 vm06 bash[20625]: audit 2026-03-08T22:57:55.220477+0000 mon.a (mon.0) 217 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-08T22:57:56.281 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:56 vm06 bash[20625]: audit 2026-03-08T22:57:55.220898+0000 mon.a (mon.0) 218 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T22:57:56.281 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:56 vm06 bash[20625]: audit 2026-03-08T22:57:55.220898+0000 mon.a (mon.0) 218 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T22:57:56.281 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:56 vm06 bash[20625]: cephadm 2026-03-08T22:57:55.221425+0000 mgr.y (mgr.14150) 44 : cephadm [INF] Reconfiguring daemon mon.c on vm06 2026-03-08T22:57:56.281 
INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:56 vm06 bash[20625]: cephadm 2026-03-08T22:57:55.221425+0000 mgr.y (mgr.14150) 44 : cephadm [INF] Reconfiguring daemon mon.c on vm06 2026-03-08T22:57:56.281 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:56 vm06 bash[20625]: audit 2026-03-08T22:57:55.624556+0000 mon.a (mon.0) 219 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:56.281 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:56 vm06 bash[20625]: audit 2026-03-08T22:57:55.624556+0000 mon.a (mon.0) 219 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:56.281 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:56 vm06 bash[20625]: audit 2026-03-08T22:57:55.629955+0000 mon.a (mon.0) 220 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:56.281 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:56 vm06 bash[20625]: audit 2026-03-08T22:57:55.629955+0000 mon.a (mon.0) 220 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:56.281 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:56 vm06 bash[20625]: audit 2026-03-08T22:57:55.631363+0000 mon.a (mon.0) 221 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-08T22:57:56.281 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:56 vm06 bash[20625]: audit 2026-03-08T22:57:55.631363+0000 mon.a (mon.0) 221 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-08T22:57:56.281 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:56 vm06 bash[20625]: audit 2026-03-08T22:57:55.632150+0000 mon.a (mon.0) 222 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-08T22:57:56.281 
INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:56 vm06 bash[20625]: audit 2026-03-08T22:57:55.632150+0000 mon.a (mon.0) 222 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-08T22:57:56.281 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:56 vm06 bash[20625]: audit 2026-03-08T22:57:55.632871+0000 mon.a (mon.0) 223 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T22:57:56.281 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:56 vm06 bash[27746]: cephadm 2026-03-08T22:57:55.073739+0000 mgr.y (mgr.14150) 39 : cephadm [INF] Updating vm06:/etc/ceph/ceph.conf 2026-03-08T22:57:56.281 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:56 vm06 bash[27746]: cephadm 2026-03-08T22:57:55.073739+0000 mgr.y (mgr.14150) 39 : cephadm [INF] Updating vm06:/etc/ceph/ceph.conf 2026-03-08T22:57:56.281 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:56 vm06 bash[27746]: cephadm 2026-03-08T22:57:55.073809+0000 mgr.y (mgr.14150) 40 : cephadm [INF] Updating vm11:/etc/ceph/ceph.conf 2026-03-08T22:57:56.281 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:56 vm06 bash[27746]: cephadm 2026-03-08T22:57:55.073809+0000 mgr.y (mgr.14150) 40 : cephadm [INF] Updating vm11:/etc/ceph/ceph.conf 2026-03-08T22:57:56.281 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:56 vm06 bash[27746]: cephadm 2026-03-08T22:57:55.121509+0000 mgr.y (mgr.14150) 41 : cephadm [INF] Updating vm06:/var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/config/ceph.conf 2026-03-08T22:57:56.281 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:56 vm06 bash[27746]: cephadm 2026-03-08T22:57:55.121509+0000 mgr.y (mgr.14150) 41 : cephadm [INF] Updating vm06:/var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/config/ceph.conf 2026-03-08T22:57:56.281 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:56 vm06 
bash[27746]: cephadm 2026-03-08T22:57:55.131041+0000 mgr.y (mgr.14150) 42 : cephadm [INF] Updating vm11:/var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/config/ceph.conf 2026-03-08T22:57:56.281 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:56 vm06 bash[27746]: cephadm 2026-03-08T22:57:55.131041+0000 mgr.y (mgr.14150) 42 : cephadm [INF] Updating vm11:/var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/config/ceph.conf 2026-03-08T22:57:56.281 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:56 vm06 bash[27746]: audit 2026-03-08T22:57:55.173467+0000 mon.a (mon.0) 207 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:56.281 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:56 vm06 bash[27746]: audit 2026-03-08T22:57:55.173467+0000 mon.a (mon.0) 207 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:56.281 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:56 vm06 bash[27746]: audit 2026-03-08T22:57:55.178661+0000 mon.a (mon.0) 208 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:56.281 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:56 vm06 bash[27746]: audit 2026-03-08T22:57:55.178661+0000 mon.a (mon.0) 208 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:56.281 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:56 vm06 bash[27746]: audit 2026-03-08T22:57:55.184332+0000 mon.a (mon.0) 209 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:56.282 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:56 vm06 bash[27746]: audit 2026-03-08T22:57:55.184332+0000 mon.a (mon.0) 209 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:56.282 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:56 vm06 bash[27746]: audit 2026-03-08T22:57:55.188652+0000 mon.a (mon.0) 210 : audit [INF] from='mgr.14150 
192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:56.282 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:56 vm06 bash[27746]: audit 2026-03-08T22:57:55.188652+0000 mon.a (mon.0) 210 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:56.282 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:56 vm06 bash[27746]: audit 2026-03-08T22:57:55.193364+0000 mon.a (mon.0) 211 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:56.282 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:56 vm06 bash[27746]: audit 2026-03-08T22:57:55.193364+0000 mon.a (mon.0) 211 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:56.282 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:56 vm06 bash[27746]: audit 2026-03-08T22:57:55.208195+0000 mon.a (mon.0) 212 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:56.282 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:56 vm06 bash[27746]: audit 2026-03-08T22:57:55.208195+0000 mon.a (mon.0) 212 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:56.282 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:56 vm06 bash[27746]: audit 2026-03-08T22:57:55.211692+0000 mon.a (mon.0) 213 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:56.282 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:56 vm06 bash[27746]: audit 2026-03-08T22:57:55.211692+0000 mon.a (mon.0) 213 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:56.282 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:56 vm06 bash[27746]: audit 2026-03-08T22:57:55.215354+0000 mon.a (mon.0) 214 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:56.282 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:56 vm06 bash[27746]: audit 
2026-03-08T22:57:55.215354+0000 mon.a (mon.0) 214 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:56.282 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:56 vm06 bash[27746]: audit 2026-03-08T22:57:55.218837+0000 mon.a (mon.0) 215 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:56.282 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:56 vm06 bash[27746]: audit 2026-03-08T22:57:55.218837+0000 mon.a (mon.0) 215 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:56.282 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:56 vm06 bash[27746]: cephadm 2026-03-08T22:57:55.219161+0000 mgr.y (mgr.14150) 43 : cephadm [INF] Reconfiguring mon.c (monmap changed)... 2026-03-08T22:57:56.282 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:56 vm06 bash[27746]: cephadm 2026-03-08T22:57:55.219161+0000 mgr.y (mgr.14150) 43 : cephadm [INF] Reconfiguring mon.c (monmap changed)... 
2026-03-08T22:57:56.282 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:56 vm06 bash[27746]: audit 2026-03-08T22:57:55.219891+0000 mon.a (mon.0) 216 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-08T22:57:56.282 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:56 vm06 bash[27746]: audit 2026-03-08T22:57:55.219891+0000 mon.a (mon.0) 216 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-08T22:57:56.282 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:56 vm06 bash[27746]: audit 2026-03-08T22:57:55.220477+0000 mon.a (mon.0) 217 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-08T22:57:56.282 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:56 vm06 bash[27746]: audit 2026-03-08T22:57:55.220477+0000 mon.a (mon.0) 217 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-08T22:57:56.282 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:56 vm06 bash[27746]: audit 2026-03-08T22:57:55.220898+0000 mon.a (mon.0) 218 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T22:57:56.282 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:56 vm06 bash[27746]: audit 2026-03-08T22:57:55.220898+0000 mon.a (mon.0) 218 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T22:57:56.282 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:56 vm06 bash[27746]: cephadm 2026-03-08T22:57:55.221425+0000 mgr.y (mgr.14150) 44 : cephadm [INF] Reconfiguring daemon mon.c on vm06 2026-03-08T22:57:56.282 
INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:56 vm06 bash[27746]: cephadm 2026-03-08T22:57:55.221425+0000 mgr.y (mgr.14150) 44 : cephadm [INF] Reconfiguring daemon mon.c on vm06 2026-03-08T22:57:56.282 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:56 vm06 bash[27746]: audit 2026-03-08T22:57:55.624556+0000 mon.a (mon.0) 219 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:56.282 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:56 vm06 bash[27746]: audit 2026-03-08T22:57:55.624556+0000 mon.a (mon.0) 219 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:56.282 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:56 vm06 bash[27746]: audit 2026-03-08T22:57:55.629955+0000 mon.a (mon.0) 220 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:56.282 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:56 vm06 bash[27746]: audit 2026-03-08T22:57:55.629955+0000 mon.a (mon.0) 220 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:56.282 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:56 vm06 bash[27746]: audit 2026-03-08T22:57:55.631363+0000 mon.a (mon.0) 221 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-08T22:57:56.282 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:56 vm06 bash[27746]: audit 2026-03-08T22:57:55.631363+0000 mon.a (mon.0) 221 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-08T22:57:56.282 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:56 vm06 bash[27746]: audit 2026-03-08T22:57:55.632150+0000 mon.a (mon.0) 222 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-08T22:57:56.282 
INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:56 vm06 bash[27746]: audit 2026-03-08T22:57:55.632150+0000 mon.a (mon.0) 222 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-08T22:57:56.282 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:56 vm06 bash[27746]: audit 2026-03-08T22:57:55.632871+0000 mon.a (mon.0) 223 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T22:57:56.282 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:56 vm06 bash[27746]: audit 2026-03-08T22:57:55.632871+0000 mon.a (mon.0) 223 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T22:57:56.282 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:56 vm06 bash[27746]: audit 2026-03-08T22:57:55.752137+0000 mon.a (mon.0) 224 : audit [DBG] from='client.? 192.168.123.111:0/3128811348' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-08T22:57:56.282 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:56 vm06 bash[27746]: audit 2026-03-08T22:57:55.752137+0000 mon.a (mon.0) 224 : audit [DBG] from='client.? 
192.168.123.111:0/3128811348' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-08T22:57:56.282 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:56 vm06 bash[27746]: audit 2026-03-08T22:57:56.024572+0000 mon.a (mon.0) 225 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:56.282 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:56 vm06 bash[27746]: audit 2026-03-08T22:57:56.024572+0000 mon.a (mon.0) 225 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:56.282 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:56 vm06 bash[27746]: audit 2026-03-08T22:57:56.028922+0000 mon.a (mon.0) 226 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-08T22:57:56.282 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:56 vm06 bash[27746]: audit 2026-03-08T22:57:56.028922+0000 mon.a (mon.0) 226 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-08T22:57:56.282 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:56 vm06 bash[27746]: audit 2026-03-08T22:57:56.030014+0000 mon.a (mon.0) 227 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:56.282 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:56 vm06 bash[27746]: audit 2026-03-08T22:57:56.030014+0000 mon.a (mon.0) 227 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:56.282 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:56 vm06 bash[27746]: audit 2026-03-08T22:57:56.031282+0000 mon.a (mon.0) 228 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-08T22:57:56.282 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:56 vm06 bash[27746]: audit 
2026-03-08T22:57:56.031282+0000 mon.a (mon.0) 228 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-08T22:57:56.282 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:56 vm06 bash[27746]: audit 2026-03-08T22:57:56.032581+0000 mon.a (mon.0) 229 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-08T22:57:56.282 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:56 vm06 bash[27746]: audit 2026-03-08T22:57:56.032581+0000 mon.a (mon.0) 229 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-08T22:57:56.282 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:56 vm06 bash[27746]: audit 2026-03-08T22:57:56.033050+0000 mon.a (mon.0) 230 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T22:57:56.282 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:56 vm06 bash[27746]: audit 2026-03-08T22:57:56.033050+0000 mon.a (mon.0) 230 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T22:57:56.282 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:56 vm06 bash[20625]: audit 2026-03-08T22:57:55.632871+0000 mon.a (mon.0) 223 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T22:57:56.282 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:56 vm06 bash[20625]: audit 2026-03-08T22:57:55.752137+0000 mon.a (mon.0) 224 : audit [DBG] from='client.? 
192.168.123.111:0/3128811348' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-08T22:57:56.282 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:56 vm06 bash[20625]: audit 2026-03-08T22:57:55.752137+0000 mon.a (mon.0) 224 : audit [DBG] from='client.? 192.168.123.111:0/3128811348' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-08T22:57:56.283 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:56 vm06 bash[20625]: audit 2026-03-08T22:57:56.024572+0000 mon.a (mon.0) 225 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:56.283 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:56 vm06 bash[20625]: audit 2026-03-08T22:57:56.024572+0000 mon.a (mon.0) 225 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:56.283 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:56 vm06 bash[20625]: audit 2026-03-08T22:57:56.028922+0000 mon.a (mon.0) 226 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-08T22:57:56.283 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:56 vm06 bash[20625]: audit 2026-03-08T22:57:56.028922+0000 mon.a (mon.0) 226 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-08T22:57:56.283 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:56 vm06 bash[20625]: audit 2026-03-08T22:57:56.030014+0000 mon.a (mon.0) 227 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:56.283 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:56 vm06 bash[20625]: audit 2026-03-08T22:57:56.030014+0000 mon.a (mon.0) 227 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:56.283 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:56 vm06 bash[20625]: audit 
2026-03-08T22:57:56.031282+0000 mon.a (mon.0) 228 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-08T22:57:56.283 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:56 vm06 bash[20625]: audit 2026-03-08T22:57:56.031282+0000 mon.a (mon.0) 228 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-08T22:57:56.283 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:56 vm06 bash[20625]: audit 2026-03-08T22:57:56.032581+0000 mon.a (mon.0) 229 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-08T22:57:56.283 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:56 vm06 bash[20625]: audit 2026-03-08T22:57:56.032581+0000 mon.a (mon.0) 229 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-08T22:57:56.283 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:56 vm06 bash[20625]: audit 2026-03-08T22:57:56.033050+0000 mon.a (mon.0) 230 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T22:57:56.283 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:56 vm06 bash[20625]: audit 2026-03-08T22:57:56.033050+0000 mon.a (mon.0) 230 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T22:57:56.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:56 vm11 bash[23232]: cephadm 2026-03-08T22:57:55.073739+0000 mgr.y (mgr.14150) 39 : cephadm [INF] Updating vm06:/etc/ceph/ceph.conf 2026-03-08T22:57:56.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:56 vm11 bash[23232]: cephadm 
2026-03-08T22:57:55.073739+0000 mgr.y (mgr.14150) 39 : cephadm [INF] Updating vm06:/etc/ceph/ceph.conf 2026-03-08T22:57:56.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:56 vm11 bash[23232]: cephadm 2026-03-08T22:57:55.073809+0000 mgr.y (mgr.14150) 40 : cephadm [INF] Updating vm11:/etc/ceph/ceph.conf 2026-03-08T22:57:56.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:56 vm11 bash[23232]: cephadm 2026-03-08T22:57:55.073809+0000 mgr.y (mgr.14150) 40 : cephadm [INF] Updating vm11:/etc/ceph/ceph.conf 2026-03-08T22:57:56.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:56 vm11 bash[23232]: cephadm 2026-03-08T22:57:55.121509+0000 mgr.y (mgr.14150) 41 : cephadm [INF] Updating vm06:/var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/config/ceph.conf 2026-03-08T22:57:56.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:56 vm11 bash[23232]: cephadm 2026-03-08T22:57:55.121509+0000 mgr.y (mgr.14150) 41 : cephadm [INF] Updating vm06:/var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/config/ceph.conf 2026-03-08T22:57:56.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:56 vm11 bash[23232]: cephadm 2026-03-08T22:57:55.131041+0000 mgr.y (mgr.14150) 42 : cephadm [INF] Updating vm11:/var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/config/ceph.conf 2026-03-08T22:57:56.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:56 vm11 bash[23232]: cephadm 2026-03-08T22:57:55.131041+0000 mgr.y (mgr.14150) 42 : cephadm [INF] Updating vm11:/var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/config/ceph.conf 2026-03-08T22:57:56.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:56 vm11 bash[23232]: audit 2026-03-08T22:57:55.173467+0000 mon.a (mon.0) 207 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:56.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:56 vm11 bash[23232]: audit 2026-03-08T22:57:55.173467+0000 mon.a (mon.0) 207 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 
2026-03-08T22:57:56.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:56 vm11 bash[23232]: audit 2026-03-08T22:57:55.178661+0000 mon.a (mon.0) 208 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:56.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:56 vm11 bash[23232]: audit 2026-03-08T22:57:55.178661+0000 mon.a (mon.0) 208 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:56.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:56 vm11 bash[23232]: audit 2026-03-08T22:57:55.184332+0000 mon.a (mon.0) 209 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:56.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:56 vm11 bash[23232]: audit 2026-03-08T22:57:55.184332+0000 mon.a (mon.0) 209 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:56.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:56 vm11 bash[23232]: audit 2026-03-08T22:57:55.188652+0000 mon.a (mon.0) 210 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:56.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:56 vm11 bash[23232]: audit 2026-03-08T22:57:55.188652+0000 mon.a (mon.0) 210 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:56.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:56 vm11 bash[23232]: audit 2026-03-08T22:57:55.193364+0000 mon.a (mon.0) 211 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:56.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:56 vm11 bash[23232]: audit 2026-03-08T22:57:55.193364+0000 mon.a (mon.0) 211 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:56.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:56 vm11 bash[23232]: audit 2026-03-08T22:57:55.208195+0000 mon.a (mon.0) 212 : audit [INF] 
from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:56.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:56 vm11 bash[23232]: audit 2026-03-08T22:57:55.208195+0000 mon.a (mon.0) 212 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:56.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:56 vm11 bash[23232]: audit 2026-03-08T22:57:55.211692+0000 mon.a (mon.0) 213 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:56.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:56 vm11 bash[23232]: audit 2026-03-08T22:57:55.211692+0000 mon.a (mon.0) 213 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:56.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:56 vm11 bash[23232]: audit 2026-03-08T22:57:55.215354+0000 mon.a (mon.0) 214 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:56.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:56 vm11 bash[23232]: audit 2026-03-08T22:57:55.215354+0000 mon.a (mon.0) 214 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:56.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:56 vm11 bash[23232]: audit 2026-03-08T22:57:55.218837+0000 mon.a (mon.0) 215 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:56.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:56 vm11 bash[23232]: audit 2026-03-08T22:57:55.218837+0000 mon.a (mon.0) 215 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:56.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:56 vm11 bash[23232]: cephadm 2026-03-08T22:57:55.219161+0000 mgr.y (mgr.14150) 43 : cephadm [INF] Reconfiguring mon.c (monmap changed)... 
2026-03-08T22:57:56.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:56 vm11 bash[23232]: cephadm 2026-03-08T22:57:55.219161+0000 mgr.y (mgr.14150) 43 : cephadm [INF] Reconfiguring mon.c (monmap changed)... 2026-03-08T22:57:56.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:56 vm11 bash[23232]: audit 2026-03-08T22:57:55.219891+0000 mon.a (mon.0) 216 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-08T22:57:56.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:56 vm11 bash[23232]: audit 2026-03-08T22:57:55.219891+0000 mon.a (mon.0) 216 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-08T22:57:56.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:56 vm11 bash[23232]: audit 2026-03-08T22:57:55.220477+0000 mon.a (mon.0) 217 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-08T22:57:56.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:56 vm11 bash[23232]: audit 2026-03-08T22:57:55.220477+0000 mon.a (mon.0) 217 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-08T22:57:56.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:56 vm11 bash[23232]: audit 2026-03-08T22:57:55.220898+0000 mon.a (mon.0) 218 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T22:57:56.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:56 vm11 bash[23232]: audit 2026-03-08T22:57:55.220898+0000 mon.a (mon.0) 218 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T22:57:56.559 
INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:56 vm11 bash[23232]: cephadm 2026-03-08T22:57:55.221425+0000 mgr.y (mgr.14150) 44 : cephadm [INF] Reconfiguring daemon mon.c on vm06 2026-03-08T22:57:56.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:56 vm11 bash[23232]: cephadm 2026-03-08T22:57:55.221425+0000 mgr.y (mgr.14150) 44 : cephadm [INF] Reconfiguring daemon mon.c on vm06 2026-03-08T22:57:56.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:56 vm11 bash[23232]: audit 2026-03-08T22:57:55.624556+0000 mon.a (mon.0) 219 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:56.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:56 vm11 bash[23232]: audit 2026-03-08T22:57:55.624556+0000 mon.a (mon.0) 219 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:56.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:56 vm11 bash[23232]: audit 2026-03-08T22:57:55.629955+0000 mon.a (mon.0) 220 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:56.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:56 vm11 bash[23232]: audit 2026-03-08T22:57:55.629955+0000 mon.a (mon.0) 220 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:56.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:56 vm11 bash[23232]: audit 2026-03-08T22:57:55.631363+0000 mon.a (mon.0) 221 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-08T22:57:56.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:56 vm11 bash[23232]: audit 2026-03-08T22:57:55.631363+0000 mon.a (mon.0) 221 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-08T22:57:56.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:56 vm11 bash[23232]: audit 
2026-03-08T22:57:55.632150+0000 mon.a (mon.0) 222 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-08T22:57:56.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:56 vm11 bash[23232]: audit 2026-03-08T22:57:55.632150+0000 mon.a (mon.0) 222 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-08T22:57:56.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:56 vm11 bash[23232]: audit 2026-03-08T22:57:55.632871+0000 mon.a (mon.0) 223 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T22:57:56.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:56 vm11 bash[23232]: audit 2026-03-08T22:57:55.632871+0000 mon.a (mon.0) 223 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T22:57:56.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:56 vm11 bash[23232]: audit 2026-03-08T22:57:55.752137+0000 mon.a (mon.0) 224 : audit [DBG] from='client.? 192.168.123.111:0/3128811348' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-08T22:57:56.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:56 vm11 bash[23232]: audit 2026-03-08T22:57:55.752137+0000 mon.a (mon.0) 224 : audit [DBG] from='client.? 
192.168.123.111:0/3128811348' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-08T22:57:56.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:56 vm11 bash[23232]: audit 2026-03-08T22:57:56.024572+0000 mon.a (mon.0) 225 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:56.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:56 vm11 bash[23232]: audit 2026-03-08T22:57:56.024572+0000 mon.a (mon.0) 225 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:56.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:56 vm11 bash[23232]: audit 2026-03-08T22:57:56.028922+0000 mon.a (mon.0) 226 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-08T22:57:56.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:56 vm11 bash[23232]: audit 2026-03-08T22:57:56.028922+0000 mon.a (mon.0) 226 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-08T22:57:56.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:56 vm11 bash[23232]: audit 2026-03-08T22:57:56.030014+0000 mon.a (mon.0) 227 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:56.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:56 vm11 bash[23232]: audit 2026-03-08T22:57:56.030014+0000 mon.a (mon.0) 227 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:56.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:56 vm11 bash[23232]: audit 2026-03-08T22:57:56.031282+0000 mon.a (mon.0) 228 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-08T22:57:56.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:56 vm11 bash[23232]: audit 
2026-03-08T22:57:56.031282+0000 mon.a (mon.0) 228 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-08T22:57:56.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:56 vm11 bash[23232]: audit 2026-03-08T22:57:56.032581+0000 mon.a (mon.0) 229 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-08T22:57:56.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:56 vm11 bash[23232]: audit 2026-03-08T22:57:56.032581+0000 mon.a (mon.0) 229 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-08T22:57:56.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:56 vm11 bash[23232]: audit 2026-03-08T22:57:56.033050+0000 mon.a (mon.0) 230 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T22:57:56.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:56 vm11 bash[23232]: audit 2026-03-08T22:57:56.033050+0000 mon.a (mon.0) 230 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T22:57:57.280 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:57:57 vm06 bash[20883]: debug 2026-03-08T22:57:57.025+0000 7fda6cfff640 -1 mgr.server handle_report got status from non-daemon mon.c 2026-03-08T22:57:57.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:57 vm06 bash[20625]: cluster 2026-03-08T22:57:55.451980+0000 mgr.y (mgr.14150) 45 : cluster [DBG] pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T22:57:57.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:57 vm06 bash[20625]: cluster 2026-03-08T22:57:55.451980+0000 mgr.y (mgr.14150) 45 : cluster [DBG] pgmap v16: 0 pgs: 
; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T22:57:57.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:57 vm06 bash[20625]: cephadm 2026-03-08T22:57:55.630820+0000 mgr.y (mgr.14150) 46 : cephadm [INF] Reconfiguring mon.a (unknown last config time)... 2026-03-08T22:57:57.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:57 vm06 bash[20625]: cephadm 2026-03-08T22:57:55.630820+0000 mgr.y (mgr.14150) 46 : cephadm [INF] Reconfiguring mon.a (unknown last config time)... 2026-03-08T22:57:57.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:57 vm06 bash[20625]: cephadm 2026-03-08T22:57:55.633611+0000 mgr.y (mgr.14150) 47 : cephadm [INF] Reconfiguring daemon mon.a on vm06 2026-03-08T22:57:57.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:57 vm06 bash[20625]: cephadm 2026-03-08T22:57:55.633611+0000 mgr.y (mgr.14150) 47 : cephadm [INF] Reconfiguring daemon mon.a on vm06 2026-03-08T22:57:57.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:57 vm06 bash[20625]: cephadm 2026-03-08T22:57:56.031035+0000 mgr.y (mgr.14150) 48 : cephadm [INF] Reconfiguring mon.b (monmap changed)... 2026-03-08T22:57:57.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:57 vm06 bash[20625]: cephadm 2026-03-08T22:57:56.031035+0000 mgr.y (mgr.14150) 48 : cephadm [INF] Reconfiguring mon.b (monmap changed)... 
2026-03-08T22:57:57.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:57 vm06 bash[20625]: cephadm 2026-03-08T22:57:56.033665+0000 mgr.y (mgr.14150) 49 : cephadm [INF] Reconfiguring daemon mon.b on vm11 2026-03-08T22:57:57.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:57 vm06 bash[20625]: cephadm 2026-03-08T22:57:56.033665+0000 mgr.y (mgr.14150) 49 : cephadm [INF] Reconfiguring daemon mon.b on vm11 2026-03-08T22:57:57.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:57 vm06 bash[20625]: audit 2026-03-08T22:57:56.412171+0000 mon.a (mon.0) 231 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:57.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:57 vm06 bash[20625]: audit 2026-03-08T22:57:56.412171+0000 mon.a (mon.0) 231 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:57.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:57 vm06 bash[20625]: audit 2026-03-08T22:57:56.418372+0000 mon.a (mon.0) 232 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:57.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:57 vm06 bash[20625]: audit 2026-03-08T22:57:56.418372+0000 mon.a (mon.0) 232 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:57.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:57 vm06 bash[20625]: audit 2026-03-08T22:57:56.419738+0000 mon.a (mon.0) 233 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T22:57:57.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:57 vm06 bash[20625]: audit 2026-03-08T22:57:56.419738+0000 mon.a (mon.0) 233 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T22:57:57.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:57 vm06 
bash[20625]: audit 2026-03-08T22:57:56.420687+0000 mon.a (mon.0) 234 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T22:57:57.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:57 vm06 bash[20625]: audit 2026-03-08T22:57:56.420687+0000 mon.a (mon.0) 234 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T22:57:57.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:57 vm06 bash[20625]: audit 2026-03-08T22:57:56.421136+0000 mon.a (mon.0) 235 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T22:57:57.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:57 vm06 bash[20625]: audit 2026-03-08T22:57:56.421136+0000 mon.a (mon.0) 235 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T22:57:57.281 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:57 vm06 bash[20625]: audit 2026-03-08T22:57:56.425189+0000 mon.a (mon.0) 236 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:57.281 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:57 vm06 bash[20625]: audit 2026-03-08T22:57:56.425189+0000 mon.a (mon.0) 236 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:57.281 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:57 vm06 bash[27746]: cluster 2026-03-08T22:57:55.451980+0000 mgr.y (mgr.14150) 45 : cluster [DBG] pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T22:57:57.281 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:57 vm06 bash[27746]: cluster 2026-03-08T22:57:55.451980+0000 mgr.y (mgr.14150) 45 : cluster [DBG] pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 
2026-03-08T22:57:57.281 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:57 vm06 bash[27746]: cephadm 2026-03-08T22:57:55.630820+0000 mgr.y (mgr.14150) 46 : cephadm [INF] Reconfiguring mon.a (unknown last config time)... 2026-03-08T22:57:57.281 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:57 vm06 bash[27746]: cephadm 2026-03-08T22:57:55.630820+0000 mgr.y (mgr.14150) 46 : cephadm [INF] Reconfiguring mon.a (unknown last config time)... 2026-03-08T22:57:57.281 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:57 vm06 bash[27746]: cephadm 2026-03-08T22:57:55.633611+0000 mgr.y (mgr.14150) 47 : cephadm [INF] Reconfiguring daemon mon.a on vm06 2026-03-08T22:57:57.281 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:57 vm06 bash[27746]: cephadm 2026-03-08T22:57:55.633611+0000 mgr.y (mgr.14150) 47 : cephadm [INF] Reconfiguring daemon mon.a on vm06 2026-03-08T22:57:57.281 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:57 vm06 bash[27746]: cephadm 2026-03-08T22:57:56.031035+0000 mgr.y (mgr.14150) 48 : cephadm [INF] Reconfiguring mon.b (monmap changed)... 2026-03-08T22:57:57.281 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:57 vm06 bash[27746]: cephadm 2026-03-08T22:57:56.031035+0000 mgr.y (mgr.14150) 48 : cephadm [INF] Reconfiguring mon.b (monmap changed)... 
2026-03-08T22:57:57.281 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:57 vm06 bash[27746]: cephadm 2026-03-08T22:57:56.033665+0000 mgr.y (mgr.14150) 49 : cephadm [INF] Reconfiguring daemon mon.b on vm11 2026-03-08T22:57:57.281 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:57 vm06 bash[27746]: cephadm 2026-03-08T22:57:56.033665+0000 mgr.y (mgr.14150) 49 : cephadm [INF] Reconfiguring daemon mon.b on vm11 2026-03-08T22:57:57.281 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:57 vm06 bash[27746]: audit 2026-03-08T22:57:56.412171+0000 mon.a (mon.0) 231 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:57.281 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:57 vm06 bash[27746]: audit 2026-03-08T22:57:56.412171+0000 mon.a (mon.0) 231 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:57.281 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:57 vm06 bash[27746]: audit 2026-03-08T22:57:56.418372+0000 mon.a (mon.0) 232 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:57.281 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:57 vm06 bash[27746]: audit 2026-03-08T22:57:56.418372+0000 mon.a (mon.0) 232 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:57.281 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:57 vm06 bash[27746]: audit 2026-03-08T22:57:56.419738+0000 mon.a (mon.0) 233 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T22:57:57.281 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:57 vm06 bash[27746]: audit 2026-03-08T22:57:56.419738+0000 mon.a (mon.0) 233 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T22:57:57.281 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:57 vm06 
bash[27746]: audit 2026-03-08T22:57:56.420687+0000 mon.a (mon.0) 234 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T22:57:57.281 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:57 vm06 bash[27746]: audit 2026-03-08T22:57:56.420687+0000 mon.a (mon.0) 234 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T22:57:57.281 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:57 vm06 bash[27746]: audit 2026-03-08T22:57:56.421136+0000 mon.a (mon.0) 235 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T22:57:57.281 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:57 vm06 bash[27746]: audit 2026-03-08T22:57:56.421136+0000 mon.a (mon.0) 235 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T22:57:57.281 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:57 vm06 bash[27746]: audit 2026-03-08T22:57:56.425189+0000 mon.a (mon.0) 236 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:57.281 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:57 vm06 bash[27746]: audit 2026-03-08T22:57:56.425189+0000 mon.a (mon.0) 236 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:57:57.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:57 vm11 bash[23232]: cluster 2026-03-08T22:57:55.451980+0000 mgr.y (mgr.14150) 45 : cluster [DBG] pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T22:57:57.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:57 vm11 bash[23232]: cluster 2026-03-08T22:57:55.451980+0000 mgr.y (mgr.14150) 45 : cluster [DBG] pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 
2026-03-08T22:57:57.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:57 vm11 bash[23232]: cephadm 2026-03-08T22:57:55.630820+0000 mgr.y (mgr.14150) 46 : cephadm [INF] Reconfiguring mon.a (unknown last config time)... 2026-03-08T22:57:57.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:57 vm11 bash[23232]: cephadm 2026-03-08T22:57:55.630820+0000 mgr.y (mgr.14150) 46 : cephadm [INF] Reconfiguring mon.a (unknown last config time)... 2026-03-08T22:57:57.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:57 vm11 bash[23232]: cephadm 2026-03-08T22:57:55.633611+0000 mgr.y (mgr.14150) 47 : cephadm [INF] Reconfiguring daemon mon.a on vm06 2026-03-08T22:57:57.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:57 vm11 bash[23232]: cephadm 2026-03-08T22:57:55.633611+0000 mgr.y (mgr.14150) 47 : cephadm [INF] Reconfiguring daemon mon.a on vm06 2026-03-08T22:57:57.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:57 vm11 bash[23232]: cephadm 2026-03-08T22:57:56.031035+0000 mgr.y (mgr.14150) 48 : cephadm [INF] Reconfiguring mon.b (monmap changed)... 2026-03-08T22:57:57.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:57 vm11 bash[23232]: cephadm 2026-03-08T22:57:56.031035+0000 mgr.y (mgr.14150) 48 : cephadm [INF] Reconfiguring mon.b (monmap changed)... 
2026-03-08T22:57:57.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:57 vm11 bash[23232]: cephadm 2026-03-08T22:57:56.033665+0000 mgr.y (mgr.14150) 49 : cephadm [INF] Reconfiguring daemon mon.b on vm11
2026-03-08T22:57:57.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:57 vm11 bash[23232]: audit 2026-03-08T22:57:56.412171+0000 mon.a (mon.0) 231 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:57:57.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:57 vm11 bash[23232]: audit 2026-03-08T22:57:56.418372+0000 mon.a (mon.0) 232 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:57:57.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:57 vm11 bash[23232]: audit 2026-03-08T22:57:56.419738+0000 mon.a (mon.0) 233 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T22:57:57.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:57 vm11 bash[23232]: audit 2026-03-08T22:57:56.420687+0000 mon.a (mon.0) 234 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T22:57:57.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:57 vm11 bash[23232]: audit 2026-03-08T22:57:56.421136+0000 mon.a (mon.0) 235 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T22:57:57.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:57 vm11 bash[23232]: audit 2026-03-08T22:57:56.425189+0000 mon.a (mon.0) 236 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:57:59.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:57:59 vm06 bash[20625]: cluster 2026-03-08T22:57:57.452133+0000 mgr.y (mgr.14150) 50 : cluster [DBG] pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T22:57:59.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:57:59 vm06 bash[27746]: cluster 2026-03-08T22:57:57.452133+0000 mgr.y (mgr.14150) 50 : cluster [DBG] pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T22:57:59.556 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:57:59 vm11 bash[23232]: cluster 2026-03-08T22:57:57.452133+0000 mgr.y (mgr.14150) 50 : cluster [DBG] pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T22:58:00.441 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/mon.c/config
2026-03-08T22:58:00.727 INFO:teuthology.orchestra.run.vm06.stdout:# minimal ceph.conf for e2eb96e6-1b41-11f1-83e5-75f1b5373d30
2026-03-08T22:58:00.728 INFO:teuthology.orchestra.run.vm06.stdout:[global]
2026-03-08T22:58:00.728 INFO:teuthology.orchestra.run.vm06.stdout: fsid = e2eb96e6-1b41-11f1-83e5-75f1b5373d30
2026-03-08T22:58:00.728 INFO:teuthology.orchestra.run.vm06.stdout: mon_host = [v2:192.168.123.106:3300/0,v1:192.168.123.106:6789/0] [v2:192.168.123.111:3300/0,v1:192.168.123.111:6789/0] [v2:192.168.123.106:3301/0,v1:192.168.123.106:6790/0]
2026-03-08T22:58:00.793 INFO:tasks.cephadm:Distributing (final) config and client.admin keyring...
2026-03-08T22:58:00.793 DEBUG:teuthology.orchestra.run.vm06:> set -ex
2026-03-08T22:58:00.793 DEBUG:teuthology.orchestra.run.vm06:> sudo dd of=/etc/ceph/ceph.conf
2026-03-08T22:58:00.799 DEBUG:teuthology.orchestra.run.vm06:> set -ex
2026-03-08T22:58:00.799 DEBUG:teuthology.orchestra.run.vm06:> sudo dd of=/etc/ceph/ceph.client.admin.keyring
2026-03-08T22:58:00.847 DEBUG:teuthology.orchestra.run.vm11:> set -ex
2026-03-08T22:58:00.847 DEBUG:teuthology.orchestra.run.vm11:> sudo dd of=/etc/ceph/ceph.conf
2026-03-08T22:58:00.854 DEBUG:teuthology.orchestra.run.vm11:> set -ex
2026-03-08T22:58:00.854 DEBUG:teuthology.orchestra.run.vm11:> sudo dd of=/etc/ceph/ceph.client.admin.keyring
2026-03-08T22:58:00.904 INFO:tasks.cephadm:Adding mgr.y on vm06
2026-03-08T22:58:00.904 INFO:tasks.cephadm:Adding mgr.x on vm11
2026-03-08T22:58:00.904 DEBUG:teuthology.orchestra.run.vm11:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e2eb96e6-1b41-11f1-83e5-75f1b5373d30 -- ceph orch apply mgr '2;vm06=y;vm11=x'
2026-03-08T22:58:01.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:01 vm06 bash[27746]: cluster 2026-03-08T22:57:59.452285+0000 mgr.y (mgr.14150) 51 : cluster [DBG] pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T22:58:01.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:01 vm06 bash[27746]: audit 2026-03-08T22:58:00.727724+0000 mon.a (mon.0) 237 : audit [DBG] from='client.? 192.168.123.106:0/1872127642' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T22:58:01.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:01 vm06 bash[20625]: cluster 2026-03-08T22:57:59.452285+0000 mgr.y (mgr.14150) 51 : cluster [DBG] pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T22:58:01.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:01 vm06 bash[20625]: audit 2026-03-08T22:58:00.727724+0000 mon.a (mon.0) 237 : audit [DBG] from='client.? 192.168.123.106:0/1872127642' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T22:58:01.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:01 vm11 bash[23232]: cluster 2026-03-08T22:57:59.452285+0000 mgr.y (mgr.14150) 51 : cluster [DBG] pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T22:58:01.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:01 vm11 bash[23232]: audit 2026-03-08T22:58:00.727724+0000 mon.a (mon.0) 237 : audit [DBG] from='client.? 192.168.123.106:0/1872127642' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T22:58:03.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:03 vm06 bash[20625]: cluster 2026-03-08T22:58:01.452456+0000 mgr.y (mgr.14150) 52 : cluster [DBG] pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T22:58:03.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:03 vm06 bash[27746]: cluster 2026-03-08T22:58:01.452456+0000 mgr.y (mgr.14150) 52 : cluster [DBG] pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T22:58:03.556 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:03 vm11 bash[23232]: cluster 2026-03-08T22:58:01.452456+0000 mgr.y (mgr.14150) 52 : cluster [DBG] pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T22:58:04.546 INFO:teuthology.orchestra.run.vm11.stderr:Inferring config /var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/mon.b/config
2026-03-08T22:58:04.792 INFO:teuthology.orchestra.run.vm11.stdout:Scheduled mgr update...
2026-03-08T22:58:04.860 DEBUG:teuthology.orchestra.run.vm11:mgr.x> sudo journalctl -f -n 0 -u ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@mgr.x.service
2026-03-08T22:58:04.861 INFO:tasks.cephadm:Deploying OSDs...
2026-03-08T22:58:04.861 DEBUG:teuthology.orchestra.run.vm06:> set -ex 2026-03-08T22:58:04.861 DEBUG:teuthology.orchestra.run.vm06:> dd if=/scratch_devs of=/dev/stdout 2026-03-08T22:58:04.864 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-08T22:58:04.864 DEBUG:teuthology.orchestra.run.vm06:> ls /dev/[sv]d? 2026-03-08T22:58:04.907 INFO:teuthology.orchestra.run.vm06.stdout:/dev/vda 2026-03-08T22:58:04.907 INFO:teuthology.orchestra.run.vm06.stdout:/dev/vdb 2026-03-08T22:58:04.907 INFO:teuthology.orchestra.run.vm06.stdout:/dev/vdc 2026-03-08T22:58:04.907 INFO:teuthology.orchestra.run.vm06.stdout:/dev/vdd 2026-03-08T22:58:04.907 INFO:teuthology.orchestra.run.vm06.stdout:/dev/vde 2026-03-08T22:58:04.907 WARNING:teuthology.misc:Removing root device: /dev/vda from device list 2026-03-08T22:58:04.907 DEBUG:teuthology.misc:devs=['/dev/vdb', '/dev/vdc', '/dev/vdd', '/dev/vde'] 2026-03-08T22:58:04.907 DEBUG:teuthology.orchestra.run.vm06:> stat /dev/vdb 2026-03-08T22:58:04.951 INFO:teuthology.orchestra.run.vm06.stdout: File: /dev/vdb 2026-03-08T22:58:04.951 INFO:teuthology.orchestra.run.vm06.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file 2026-03-08T22:58:04.951 INFO:teuthology.orchestra.run.vm06.stdout:Device: 5h/5d Inode: 24 Links: 1 Device type: fe,10 2026-03-08T22:58:04.951 INFO:teuthology.orchestra.run.vm06.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-08T22:58:04.951 INFO:teuthology.orchestra.run.vm06.stdout:Access: 2026-03-08 22:51:14.981956215 +0000 2026-03-08T22:58:04.951 INFO:teuthology.orchestra.run.vm06.stdout:Modify: 2026-03-08 22:51:13.985956215 +0000 2026-03-08T22:58:04.951 INFO:teuthology.orchestra.run.vm06.stdout:Change: 2026-03-08 22:51:13.985956215 +0000 2026-03-08T22:58:04.951 INFO:teuthology.orchestra.run.vm06.stdout: Birth: - 2026-03-08T22:58:04.951 DEBUG:teuthology.orchestra.run.vm06:> sudo dd if=/dev/vdb of=/dev/null count=1 2026-03-08T22:58:04.999 INFO:teuthology.orchestra.run.vm06.stderr:1+0 
records in 2026-03-08T22:58:04.999 INFO:teuthology.orchestra.run.vm06.stderr:1+0 records out 2026-03-08T22:58:04.999 INFO:teuthology.orchestra.run.vm06.stderr:512 bytes copied, 0.000178084 s, 2.9 MB/s 2026-03-08T22:58:05.000 DEBUG:teuthology.orchestra.run.vm06:> ! mount | grep -v devtmpfs | grep -q /dev/vdb 2026-03-08T22:58:05.048 DEBUG:teuthology.orchestra.run.vm06:> stat /dev/vdc 2026-03-08T22:58:05.091 INFO:teuthology.orchestra.run.vm06.stdout: File: /dev/vdc 2026-03-08T22:58:05.091 INFO:teuthology.orchestra.run.vm06.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file 2026-03-08T22:58:05.091 INFO:teuthology.orchestra.run.vm06.stdout:Device: 5h/5d Inode: 25 Links: 1 Device type: fe,20 2026-03-08T22:58:05.091 INFO:teuthology.orchestra.run.vm06.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-08T22:58:05.091 INFO:teuthology.orchestra.run.vm06.stdout:Access: 2026-03-08 22:51:14.989956215 +0000 2026-03-08T22:58:05.091 INFO:teuthology.orchestra.run.vm06.stdout:Modify: 2026-03-08 22:51:13.981956215 +0000 2026-03-08T22:58:05.091 INFO:teuthology.orchestra.run.vm06.stdout:Change: 2026-03-08 22:51:13.981956215 +0000 2026-03-08T22:58:05.091 INFO:teuthology.orchestra.run.vm06.stdout: Birth: - 2026-03-08T22:58:05.091 DEBUG:teuthology.orchestra.run.vm06:> sudo dd if=/dev/vdc of=/dev/null count=1 2026-03-08T22:58:05.106 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 22:58:05 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-08T22:58:05.139 INFO:teuthology.orchestra.run.vm06.stderr:1+0 records in 2026-03-08T22:58:05.139 INFO:teuthology.orchestra.run.vm06.stderr:1+0 records out 2026-03-08T22:58:05.139 INFO:teuthology.orchestra.run.vm06.stderr:512 bytes copied, 0.000153377 s, 3.3 MB/s 2026-03-08T22:58:05.140 DEBUG:teuthology.orchestra.run.vm06:> ! mount | grep -v devtmpfs | grep -q /dev/vdc 2026-03-08T22:58:05.184 DEBUG:teuthology.orchestra.run.vm06:> stat /dev/vdd 2026-03-08T22:58:05.231 INFO:teuthology.orchestra.run.vm06.stdout: File: /dev/vdd 2026-03-08T22:58:05.231 INFO:teuthology.orchestra.run.vm06.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file 2026-03-08T22:58:05.231 INFO:teuthology.orchestra.run.vm06.stdout:Device: 5h/5d Inode: 26 Links: 1 Device type: fe,30 2026-03-08T22:58:05.231 INFO:teuthology.orchestra.run.vm06.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-08T22:58:05.231 INFO:teuthology.orchestra.run.vm06.stdout:Access: 2026-03-08 22:51:14.981956215 +0000 2026-03-08T22:58:05.231 INFO:teuthology.orchestra.run.vm06.stdout:Modify: 2026-03-08 22:51:14.001956215 +0000 2026-03-08T22:58:05.231 INFO:teuthology.orchestra.run.vm06.stdout:Change: 2026-03-08 22:51:14.001956215 +0000 2026-03-08T22:58:05.231 INFO:teuthology.orchestra.run.vm06.stdout: Birth: - 2026-03-08T22:58:05.231 DEBUG:teuthology.orchestra.run.vm06:> sudo dd if=/dev/vdd of=/dev/null count=1 2026-03-08T22:58:05.260 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:05 vm06 bash[20625]: cluster 2026-03-08T22:58:03.452607+0000 mgr.y (mgr.14150) 53 : cluster [DBG] pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T22:58:05.261 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:05 vm06 bash[20625]: cluster 2026-03-08T22:58:03.452607+0000 mgr.y (mgr.14150) 53 : cluster [DBG] pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T22:58:05.261 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:05 vm06 bash[20625]: audit 
2026-03-08T22:58:04.791807+0000 mon.a (mon.0) 238 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:58:05.261 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:05 vm06 bash[20625]: audit 2026-03-08T22:58:04.791807+0000 mon.a (mon.0) 238 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:58:05.261 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:05 vm06 bash[20625]: audit 2026-03-08T22:58:04.793043+0000 mon.a (mon.0) 239 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T22:58:05.261 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:05 vm06 bash[20625]: audit 2026-03-08T22:58:04.793043+0000 mon.a (mon.0) 239 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T22:58:05.261 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:05 vm06 bash[20625]: audit 2026-03-08T22:58:04.794208+0000 mon.a (mon.0) 240 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T22:58:05.261 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:05 vm06 bash[20625]: audit 2026-03-08T22:58:04.794208+0000 mon.a (mon.0) 240 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T22:58:05.261 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:05 vm06 bash[20625]: audit 2026-03-08T22:58:04.794703+0000 mon.a (mon.0) 241 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T22:58:05.261 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:05 vm06 bash[20625]: audit 2026-03-08T22:58:04.794703+0000 mon.a (mon.0) 241 : audit [INF] from='mgr.14150 
192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T22:58:05.261 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:05 vm06 bash[20625]: audit 2026-03-08T22:58:04.798767+0000 mon.a (mon.0) 242 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:58:05.261 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:05 vm06 bash[20625]: audit 2026-03-08T22:58:04.798767+0000 mon.a (mon.0) 242 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:58:05.261 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:05 vm06 bash[20625]: audit 2026-03-08T22:58:04.800047+0000 mon.a (mon.0) 243 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-08T22:58:05.261 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:05 vm06 bash[20625]: audit 2026-03-08T22:58:04.800047+0000 mon.a (mon.0) 243 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-08T22:58:05.261 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:05 vm06 bash[20625]: audit 2026-03-08T22:58:04.801913+0000 mon.a (mon.0) 244 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished 2026-03-08T22:58:05.261 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:05 vm06 bash[20625]: audit 2026-03-08T22:58:04.801913+0000 mon.a (mon.0) 244 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", 
"mds", "allow *"]}]': finished 2026-03-08T22:58:05.261 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:05 vm06 bash[20625]: audit 2026-03-08T22:58:04.803667+0000 mon.a (mon.0) 245 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-08T22:58:05.261 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:05 vm06 bash[20625]: audit 2026-03-08T22:58:04.803667+0000 mon.a (mon.0) 245 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-08T22:58:05.261 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:05 vm06 bash[20625]: audit 2026-03-08T22:58:04.804165+0000 mon.a (mon.0) 246 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T22:58:05.261 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:05 vm06 bash[20625]: audit 2026-03-08T22:58:04.804165+0000 mon.a (mon.0) 246 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T22:58:05.261 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:05 vm06 bash[27746]: cluster 2026-03-08T22:58:03.452607+0000 mgr.y (mgr.14150) 53 : cluster [DBG] pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T22:58:05.261 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:05 vm06 bash[27746]: cluster 2026-03-08T22:58:03.452607+0000 mgr.y (mgr.14150) 53 : cluster [DBG] pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T22:58:05.261 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:05 vm06 bash[27746]: audit 2026-03-08T22:58:04.791807+0000 mon.a (mon.0) 238 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:58:05.261 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:05 vm06 bash[27746]: audit 2026-03-08T22:58:04.791807+0000 mon.a (mon.0) 238 : audit 
[INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:58:05.268 INFO:teuthology.orchestra.run.vm06.stderr:1+0 records in 2026-03-08T22:58:05.268 INFO:teuthology.orchestra.run.vm06.stderr:1+0 records out 2026-03-08T22:58:05.268 INFO:teuthology.orchestra.run.vm06.stderr:512 bytes copied, 0.000140052 s, 3.7 MB/s 2026-03-08T22:58:05.269 DEBUG:teuthology.orchestra.run.vm06:> ! mount | grep -v devtmpfs | grep -q /dev/vdd 2026-03-08T22:58:05.316 DEBUG:teuthology.orchestra.run.vm06:> stat /dev/vde 2026-03-08T22:58:05.358 INFO:teuthology.orchestra.run.vm06.stdout: File: /dev/vde 2026-03-08T22:58:05.358 INFO:teuthology.orchestra.run.vm06.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file 2026-03-08T22:58:05.359 INFO:teuthology.orchestra.run.vm06.stdout:Device: 5h/5d Inode: 27 Links: 1 Device type: fe,40 2026-03-08T22:58:05.359 INFO:teuthology.orchestra.run.vm06.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-08T22:58:05.359 INFO:teuthology.orchestra.run.vm06.stdout:Access: 2026-03-08 22:51:14.989956215 +0000 2026-03-08T22:58:05.359 INFO:teuthology.orchestra.run.vm06.stdout:Modify: 2026-03-08 22:51:13.993956215 +0000 2026-03-08T22:58:05.359 INFO:teuthology.orchestra.run.vm06.stdout:Change: 2026-03-08 22:51:13.993956215 +0000 2026-03-08T22:58:05.359 INFO:teuthology.orchestra.run.vm06.stdout: Birth: - 2026-03-08T22:58:05.359 DEBUG:teuthology.orchestra.run.vm06:> sudo dd if=/dev/vde of=/dev/null count=1 2026-03-08T22:58:05.359 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:05 vm11 bash[23232]: cluster 2026-03-08T22:58:03.452607+0000 mgr.y (mgr.14150) 53 : cluster [DBG] pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T22:58:05.359 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:05 vm11 bash[23232]: cluster 2026-03-08T22:58:03.452607+0000 mgr.y (mgr.14150) 53 : cluster [DBG] pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T22:58:05.359 
INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:05 vm11 bash[23232]: audit 2026-03-08T22:58:04.791807+0000 mon.a (mon.0) 238 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:58:05.359 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:05 vm11 bash[23232]: audit 2026-03-08T22:58:04.791807+0000 mon.a (mon.0) 238 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:58:05.360 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:05 vm11 bash[23232]: audit 2026-03-08T22:58:04.793043+0000 mon.a (mon.0) 239 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T22:58:05.360 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:05 vm11 bash[23232]: audit 2026-03-08T22:58:04.793043+0000 mon.a (mon.0) 239 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T22:58:05.360 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:05 vm11 bash[23232]: audit 2026-03-08T22:58:04.794208+0000 mon.a (mon.0) 240 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T22:58:05.360 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:05 vm11 bash[23232]: audit 2026-03-08T22:58:04.794208+0000 mon.a (mon.0) 240 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T22:58:05.360 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:05 vm11 bash[23232]: audit 2026-03-08T22:58:04.794703+0000 mon.a (mon.0) 241 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T22:58:05.360 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:05 vm11 bash[23232]: audit 
2026-03-08T22:58:04.794703+0000 mon.a (mon.0) 241 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T22:58:05.360 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:05 vm11 bash[23232]: audit 2026-03-08T22:58:04.798767+0000 mon.a (mon.0) 242 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:58:05.360 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:05 vm11 bash[23232]: audit 2026-03-08T22:58:04.798767+0000 mon.a (mon.0) 242 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:58:05.360 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:05 vm11 bash[23232]: audit 2026-03-08T22:58:04.800047+0000 mon.a (mon.0) 243 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-08T22:58:05.360 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:05 vm11 bash[23232]: audit 2026-03-08T22:58:04.800047+0000 mon.a (mon.0) 243 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-08T22:58:05.360 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:05 vm11 bash[23232]: audit 2026-03-08T22:58:04.801913+0000 mon.a (mon.0) 244 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished 2026-03-08T22:58:05.360 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:05 vm11 bash[23232]: audit 2026-03-08T22:58:04.801913+0000 mon.a (mon.0) 244 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd='[{"prefix": "auth 
get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished 2026-03-08T22:58:05.360 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:05 vm11 bash[23232]: audit 2026-03-08T22:58:04.803667+0000 mon.a (mon.0) 245 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-08T22:58:05.360 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:05 vm11 bash[23232]: audit 2026-03-08T22:58:04.803667+0000 mon.a (mon.0) 245 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-08T22:58:05.360 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:05 vm11 bash[23232]: audit 2026-03-08T22:58:04.804165+0000 mon.a (mon.0) 246 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T22:58:05.360 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:05 vm11 bash[23232]: audit 2026-03-08T22:58:04.804165+0000 mon.a (mon.0) 246 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T22:58:05.360 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:05 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T22:58:05.360 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 22:58:05 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T22:58:05.360 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 22:58:05 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T22:58:05.360 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 22:58:05 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T22:58:05.406 INFO:teuthology.orchestra.run.vm06.stderr:1+0 records in 2026-03-08T22:58:05.406 INFO:teuthology.orchestra.run.vm06.stderr:1+0 records out 2026-03-08T22:58:05.406 INFO:teuthology.orchestra.run.vm06.stderr:512 bytes copied, 0.000169406 s, 3.0 MB/s 2026-03-08T22:58:05.406 DEBUG:teuthology.orchestra.run.vm06:> ! mount | grep -v devtmpfs | grep -q /dev/vde 2026-03-08T22:58:05.452 DEBUG:teuthology.orchestra.run.vm11:> set -ex 2026-03-08T22:58:05.452 DEBUG:teuthology.orchestra.run.vm11:> dd if=/scratch_devs of=/dev/stdout 2026-03-08T22:58:05.455 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-08T22:58:05.455 DEBUG:teuthology.orchestra.run.vm11:> ls /dev/[sv]d? 
2026-03-08T22:58:05.498 INFO:teuthology.orchestra.run.vm11.stdout:/dev/vda
2026-03-08T22:58:05.498 INFO:teuthology.orchestra.run.vm11.stdout:/dev/vdb
2026-03-08T22:58:05.498 INFO:teuthology.orchestra.run.vm11.stdout:/dev/vdc
2026-03-08T22:58:05.498 INFO:teuthology.orchestra.run.vm11.stdout:/dev/vdd
2026-03-08T22:58:05.498 INFO:teuthology.orchestra.run.vm11.stdout:/dev/vde
2026-03-08T22:58:05.499 WARNING:teuthology.misc:Removing root device: /dev/vda from device list
2026-03-08T22:58:05.499 DEBUG:teuthology.misc:devs=['/dev/vdb', '/dev/vdc', '/dev/vdd', '/dev/vde']
2026-03-08T22:58:05.499 DEBUG:teuthology.orchestra.run.vm11:> stat /dev/vdb
2026-03-08T22:58:05.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:05 vm06 bash[27746]: audit 2026-03-08T22:58:04.793043+0000 mon.a (mon.0) 239 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T22:58:05.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:05 vm06 bash[27746]: audit 2026-03-08T22:58:04.793043+0000 mon.a (mon.0) 239 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T22:58:05.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:05 vm06 bash[27746]: audit 2026-03-08T22:58:04.794208+0000 mon.a (mon.0) 240 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T22:58:05.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:05 vm06 bash[27746]: audit 2026-03-08T22:58:04.794208+0000 mon.a (mon.0) 240 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T22:58:05.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:05 vm06 bash[27746]: audit 2026-03-08T22:58:04.794703+0000 mon.a (mon.0) 241 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T22:58:05.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:05 vm06 bash[27746]: audit 2026-03-08T22:58:04.794703+0000 mon.a (mon.0) 241 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T22:58:05.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:05 vm06 bash[27746]: audit 2026-03-08T22:58:04.798767+0000 mon.a (mon.0) 242 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:58:05.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:05 vm06 bash[27746]: audit 2026-03-08T22:58:04.798767+0000 mon.a (mon.0) 242 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:58:05.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:05 vm06 bash[27746]: audit 2026-03-08T22:58:04.800047+0000 mon.a (mon.0) 243 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-08T22:58:05.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:05 vm06 bash[27746]: audit 2026-03-08T22:58:04.800047+0000 mon.a (mon.0) 243 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
2026-03-08T22:58:05.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:05 vm06 bash[27746]: audit 2026-03-08T22:58:04.801913+0000 mon.a (mon.0) 244 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
2026-03-08T22:58:05.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:05 vm06 bash[27746]: audit 2026-03-08T22:58:04.801913+0000 mon.a (mon.0) 244 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
2026-03-08T22:58:05.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:05 vm06 bash[27746]: audit 2026-03-08T22:58:04.803667+0000 mon.a (mon.0) 245 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch
2026-03-08T22:58:05.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:05 vm06 bash[27746]: audit 2026-03-08T22:58:04.803667+0000 mon.a (mon.0) 245 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch
2026-03-08T22:58:05.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:05 vm06 bash[27746]: audit 2026-03-08T22:58:04.804165+0000 mon.a (mon.0) 246 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T22:58:05.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:05 vm06 bash[27746]: audit 2026-03-08T22:58:04.804165+0000 mon.a (mon.0) 246 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T22:58:05.546 INFO:teuthology.orchestra.run.vm11.stdout: File: /dev/vdb
2026-03-08T22:58:05.546 INFO:teuthology.orchestra.run.vm11.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file
2026-03-08T22:58:05.546 INFO:teuthology.orchestra.run.vm11.stdout:Device: 5h/5d Inode: 24 Links: 1 Device type: fe,10
2026-03-08T22:58:05.546 INFO:teuthology.orchestra.run.vm11.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-08T22:58:05.546 INFO:teuthology.orchestra.run.vm11.stdout:Access: 2026-03-08 22:51:40.152645794 +0000
2026-03-08T22:58:05.546 INFO:teuthology.orchestra.run.vm11.stdout:Modify: 2026-03-08 22:51:39.092645794 +0000
2026-03-08T22:58:05.546 INFO:teuthology.orchestra.run.vm11.stdout:Change: 2026-03-08 22:51:39.092645794 +0000
2026-03-08T22:58:05.546 INFO:teuthology.orchestra.run.vm11.stdout: Birth: -
2026-03-08T22:58:05.546 DEBUG:teuthology.orchestra.run.vm11:> sudo dd if=/dev/vdb of=/dev/null count=1
2026-03-08T22:58:05.598 INFO:teuthology.orchestra.run.vm11.stderr:1+0 records in
2026-03-08T22:58:05.598 INFO:teuthology.orchestra.run.vm11.stderr:1+0 records out
2026-03-08T22:58:05.598 INFO:teuthology.orchestra.run.vm11.stderr:512 bytes copied, 0.000215072 s, 2.4 MB/s
2026-03-08T22:58:05.599 DEBUG:teuthology.orchestra.run.vm11:> ! mount | grep -v devtmpfs | grep -q /dev/vdb
2026-03-08T22:58:05.647 DEBUG:teuthology.orchestra.run.vm11:> stat /dev/vdc
2026-03-08T22:58:05.651 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:05 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T22:58:05.651 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 22:58:05 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T22:58:05.651 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 22:58:05 vm11 systemd[1]: Started Ceph mgr.x for e2eb96e6-1b41-11f1-83e5-75f1b5373d30.
2026-03-08T22:58:05.654 INFO:teuthology.orchestra.run.vm11.stdout: File: /dev/vdc
2026-03-08T22:58:05.654 INFO:teuthology.orchestra.run.vm11.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file
2026-03-08T22:58:05.654 INFO:teuthology.orchestra.run.vm11.stdout:Device: 5h/5d Inode: 25 Links: 1 Device type: fe,20
2026-03-08T22:58:05.654 INFO:teuthology.orchestra.run.vm11.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-08T22:58:05.654 INFO:teuthology.orchestra.run.vm11.stdout:Access: 2026-03-08 22:51:40.160645794 +0000
2026-03-08T22:58:05.654 INFO:teuthology.orchestra.run.vm11.stdout:Modify: 2026-03-08 22:51:39.052645794 +0000
2026-03-08T22:58:05.654 INFO:teuthology.orchestra.run.vm11.stdout:Change: 2026-03-08 22:51:39.052645794 +0000
2026-03-08T22:58:05.654 INFO:teuthology.orchestra.run.vm11.stdout: Birth: -
2026-03-08T22:58:05.654 DEBUG:teuthology.orchestra.run.vm11:> sudo dd if=/dev/vdc of=/dev/null count=1
2026-03-08T22:58:05.715 INFO:teuthology.orchestra.run.vm11.stderr:1+0 records in
2026-03-08T22:58:05.715 INFO:teuthology.orchestra.run.vm11.stderr:1+0 records out
2026-03-08T22:58:05.715 INFO:teuthology.orchestra.run.vm11.stderr:512 bytes copied, 0.00160672 s, 319 kB/s
2026-03-08T22:58:05.715 DEBUG:teuthology.orchestra.run.vm11:> ! mount | grep -v devtmpfs | grep -q /dev/vdc
2026-03-08T22:58:05.773 DEBUG:teuthology.orchestra.run.vm11:> stat /dev/vdd
2026-03-08T22:58:05.824 INFO:teuthology.orchestra.run.vm11.stdout: File: /dev/vdd
2026-03-08T22:58:05.824 INFO:teuthology.orchestra.run.vm11.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file
2026-03-08T22:58:05.824 INFO:teuthology.orchestra.run.vm11.stdout:Device: 5h/5d Inode: 26 Links: 1 Device type: fe,30
2026-03-08T22:58:05.824 INFO:teuthology.orchestra.run.vm11.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-08T22:58:05.824 INFO:teuthology.orchestra.run.vm11.stdout:Access: 2026-03-08 22:51:40.152645794 +0000
2026-03-08T22:58:05.824 INFO:teuthology.orchestra.run.vm11.stdout:Modify: 2026-03-08 22:51:39.044645794 +0000
2026-03-08T22:58:05.824 INFO:teuthology.orchestra.run.vm11.stdout:Change: 2026-03-08 22:51:39.044645794 +0000
2026-03-08T22:58:05.824 INFO:teuthology.orchestra.run.vm11.stdout: Birth: -
2026-03-08T22:58:05.824 DEBUG:teuthology.orchestra.run.vm11:> sudo dd if=/dev/vdd of=/dev/null count=1
2026-03-08T22:58:05.880 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 22:58:05 vm11 bash[24047]: debug 2026-03-08T22:58:05.783+0000 7f177095f140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
2026-03-08T22:58:05.881 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 22:58:05 vm11 bash[24047]: debug 2026-03-08T22:58:05.819+0000 7f177095f140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
2026-03-08T22:58:05.883 INFO:teuthology.orchestra.run.vm11.stderr:1+0 records in
2026-03-08T22:58:05.884 INFO:teuthology.orchestra.run.vm11.stderr:1+0 records out
2026-03-08T22:58:05.884 INFO:teuthology.orchestra.run.vm11.stderr:512 bytes copied, 0.00206754 s, 248 kB/s
2026-03-08T22:58:05.888 DEBUG:teuthology.orchestra.run.vm11:> ! mount | grep -v devtmpfs | grep -q /dev/vdd
2026-03-08T22:58:05.937 DEBUG:teuthology.orchestra.run.vm11:> stat /dev/vde
2026-03-08T22:58:05.983 INFO:teuthology.orchestra.run.vm11.stdout: File: /dev/vde
2026-03-08T22:58:05.983 INFO:teuthology.orchestra.run.vm11.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file
2026-03-08T22:58:05.983 INFO:teuthology.orchestra.run.vm11.stdout:Device: 5h/5d Inode: 27 Links: 1 Device type: fe,40
2026-03-08T22:58:05.983 INFO:teuthology.orchestra.run.vm11.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
2026-03-08T22:58:05.983 INFO:teuthology.orchestra.run.vm11.stdout:Access: 2026-03-08 22:51:40.160645794 +0000
2026-03-08T22:58:05.983 INFO:teuthology.orchestra.run.vm11.stdout:Modify: 2026-03-08 22:51:39.056645794 +0000
2026-03-08T22:58:05.983 INFO:teuthology.orchestra.run.vm11.stdout:Change: 2026-03-08 22:51:39.056645794 +0000
2026-03-08T22:58:05.983 INFO:teuthology.orchestra.run.vm11.stdout: Birth: -
2026-03-08T22:58:05.984 DEBUG:teuthology.orchestra.run.vm11:> sudo dd if=/dev/vde of=/dev/null count=1
2026-03-08T22:58:06.032 INFO:teuthology.orchestra.run.vm11.stderr:1+0 records in
2026-03-08T22:58:06.032 INFO:teuthology.orchestra.run.vm11.stderr:1+0 records out
2026-03-08T22:58:06.032 INFO:teuthology.orchestra.run.vm11.stderr:512 bytes copied, 0.000174918 s, 2.9 MB/s
2026-03-08T22:58:06.033 DEBUG:teuthology.orchestra.run.vm11:> ! mount | grep -v devtmpfs | grep -q /dev/vde
2026-03-08T22:58:06.082 INFO:tasks.cephadm:Deploying osd.0 on vm06 with /dev/vde...
2026-03-08T22:58:06.082 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e2eb96e6-1b41-11f1-83e5-75f1b5373d30 -- lvm zap /dev/vde
2026-03-08T22:58:06.222 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 22:58:05 vm11 bash[24047]: debug 2026-03-08T22:58:05.935+0000 7f177095f140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
2026-03-08T22:58:06.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:06 vm06 bash[20625]: audit 2026-03-08T22:58:04.785000+0000 mgr.y (mgr.14150) 54 : audit [DBG] from='client.24107 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "placement": "2;vm06=y;vm11=x", "target": ["mon-mgr", ""]}]: dispatch
2026-03-08T22:58:06.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:06 vm06 bash[20625]: audit 2026-03-08T22:58:04.785000+0000 mgr.y (mgr.14150) 54 : audit [DBG] from='client.24107 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "placement": "2;vm06=y;vm11=x", "target": ["mon-mgr", ""]}]: dispatch
2026-03-08T22:58:06.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:06 vm06 bash[20625]: cephadm 2026-03-08T22:58:04.785771+0000 mgr.y (mgr.14150) 55 : cephadm [INF] Saving service mgr spec with placement vm06=y;vm11=x;count:2
2026-03-08T22:58:06.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:06 vm06 bash[20625]: cephadm 2026-03-08T22:58:04.785771+0000 mgr.y (mgr.14150) 55 : cephadm [INF] Saving service mgr spec with placement vm06=y;vm11=x;count:2
2026-03-08T22:58:06.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:06 vm06 bash[20625]: cephadm 2026-03-08T22:58:04.804721+0000 mgr.y (mgr.14150) 56 : cephadm [INF] Deploying daemon mgr.x on vm11
2026-03-08T22:58:06.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:06 vm06 bash[20625]: cephadm 2026-03-08T22:58:04.804721+0000 mgr.y (mgr.14150) 56 : cephadm [INF] Deploying daemon mgr.x on vm11
2026-03-08T22:58:06.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:06 vm06 bash[20625]: audit 2026-03-08T22:58:05.571557+0000 mon.a (mon.0) 247 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:58:06.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:06 vm06 bash[20625]: audit 2026-03-08T22:58:05.571557+0000 mon.a (mon.0) 247 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:58:06.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:06 vm06 bash[20625]: audit 2026-03-08T22:58:05.574876+0000 mon.a (mon.0) 248 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:58:06.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:06 vm06 bash[20625]: audit 2026-03-08T22:58:05.574876+0000 mon.a (mon.0) 248 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:58:06.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:06 vm06 bash[20625]: audit 2026-03-08T22:58:05.578197+0000 mon.a (mon.0) 249 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:58:06.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:06 vm06 bash[20625]: audit 2026-03-08T22:58:05.578197+0000 mon.a (mon.0) 249 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:58:06.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:06 vm06 bash[20625]: audit 2026-03-08T22:58:05.581918+0000 mon.a (mon.0) 250 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:58:06.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:06 vm06 bash[20625]: audit 2026-03-08T22:58:05.581918+0000 mon.a (mon.0) 250 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:58:06.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:06 vm06 bash[20625]: audit 2026-03-08T22:58:05.591076+0000 mon.a (mon.0) 251 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T22:58:06.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:06 vm06 bash[20625]: audit 2026-03-08T22:58:05.591076+0000 mon.a (mon.0) 251 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T22:58:06.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:06 vm06 bash[27746]: audit 2026-03-08T22:58:04.785000+0000 mgr.y (mgr.14150) 54 : audit [DBG] from='client.24107 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "placement": "2;vm06=y;vm11=x", "target": ["mon-mgr", ""]}]: dispatch
2026-03-08T22:58:06.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:06 vm06 bash[27746]: audit 2026-03-08T22:58:04.785000+0000 mgr.y (mgr.14150) 54 : audit [DBG] from='client.24107 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "placement": "2;vm06=y;vm11=x", "target": ["mon-mgr", ""]}]: dispatch
2026-03-08T22:58:06.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:06 vm06 bash[27746]: cephadm 2026-03-08T22:58:04.785771+0000 mgr.y (mgr.14150) 55 : cephadm [INF] Saving service mgr spec with placement vm06=y;vm11=x;count:2
2026-03-08T22:58:06.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:06 vm06 bash[27746]: cephadm 2026-03-08T22:58:04.785771+0000 mgr.y (mgr.14150) 55 : cephadm [INF] Saving service mgr spec with placement vm06=y;vm11=x;count:2
2026-03-08T22:58:06.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:06 vm06 bash[27746]: cephadm 2026-03-08T22:58:04.804721+0000 mgr.y (mgr.14150) 56 : cephadm [INF] Deploying daemon mgr.x on vm11
2026-03-08T22:58:06.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:06 vm06 bash[27746]: cephadm 2026-03-08T22:58:04.804721+0000 mgr.y (mgr.14150) 56 : cephadm [INF] Deploying daemon mgr.x on vm11
2026-03-08T22:58:06.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:06 vm06 bash[27746]: audit 2026-03-08T22:58:05.571557+0000 mon.a (mon.0) 247 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:58:06.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:06 vm06 bash[27746]: audit 2026-03-08T22:58:05.571557+0000 mon.a (mon.0) 247 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:58:06.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:06 vm06 bash[27746]: audit 2026-03-08T22:58:05.574876+0000 mon.a (mon.0) 248 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:58:06.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:06 vm06 bash[27746]: audit 2026-03-08T22:58:05.574876+0000 mon.a (mon.0) 248 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:58:06.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:06 vm06 bash[27746]: audit 2026-03-08T22:58:05.578197+0000 mon.a (mon.0) 249 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:58:06.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:06 vm06 bash[27746]: audit 2026-03-08T22:58:05.578197+0000 mon.a (mon.0) 249 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:58:06.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:06 vm06 bash[27746]: audit 2026-03-08T22:58:05.581918+0000 mon.a (mon.0) 250 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:58:06.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:06 vm06 bash[27746]: audit 2026-03-08T22:58:05.581918+0000 mon.a (mon.0) 250 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:58:06.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:06 vm06 bash[27746]: audit 2026-03-08T22:58:05.591076+0000 mon.a (mon.0) 251 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T22:58:06.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:06 vm06 bash[27746]: audit 2026-03-08T22:58:05.591076+0000 mon.a (mon.0) 251 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T22:58:06.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:06 vm11 bash[23232]: audit 2026-03-08T22:58:04.785000+0000 mgr.y (mgr.14150) 54 : audit [DBG] from='client.24107 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "placement": "2;vm06=y;vm11=x", "target": ["mon-mgr", ""]}]: dispatch
2026-03-08T22:58:06.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:06 vm11 bash[23232]: audit 2026-03-08T22:58:04.785000+0000 mgr.y (mgr.14150) 54 : audit [DBG] from='client.24107 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "placement": "2;vm06=y;vm11=x", "target": ["mon-mgr", ""]}]: dispatch
2026-03-08T22:58:06.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:06 vm11 bash[23232]: cephadm 2026-03-08T22:58:04.785771+0000 mgr.y (mgr.14150) 55 : cephadm [INF] Saving service mgr spec with placement vm06=y;vm11=x;count:2
2026-03-08T22:58:06.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:06 vm11 bash[23232]: cephadm 2026-03-08T22:58:04.785771+0000 mgr.y (mgr.14150) 55 : cephadm [INF] Saving service mgr spec with placement vm06=y;vm11=x;count:2
2026-03-08T22:58:06.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:06 vm11 bash[23232]: cephadm 2026-03-08T22:58:04.804721+0000 mgr.y (mgr.14150) 56 : cephadm [INF] Deploying daemon mgr.x on vm11
2026-03-08T22:58:06.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:06 vm11 bash[23232]: cephadm 2026-03-08T22:58:04.804721+0000 mgr.y (mgr.14150) 56 : cephadm [INF] Deploying daemon mgr.x on vm11
2026-03-08T22:58:06.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:06 vm11 bash[23232]: audit 2026-03-08T22:58:05.571557+0000 mon.a (mon.0) 247 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:58:06.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:06 vm11 bash[23232]: audit 2026-03-08T22:58:05.571557+0000 mon.a (mon.0) 247 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:58:06.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:06 vm11 bash[23232]: audit 2026-03-08T22:58:05.574876+0000 mon.a (mon.0) 248 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:58:06.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:06 vm11 bash[23232]: audit 2026-03-08T22:58:05.574876+0000 mon.a (mon.0) 248 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:58:06.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:06 vm11 bash[23232]: audit 2026-03-08T22:58:05.578197+0000 mon.a (mon.0) 249 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:58:06.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:06 vm11 bash[23232]: audit 2026-03-08T22:58:05.578197+0000 mon.a (mon.0) 249 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:58:06.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:06 vm11 bash[23232]: audit 2026-03-08T22:58:05.581918+0000 mon.a (mon.0) 250 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:58:06.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:06 vm11 bash[23232]: audit 2026-03-08T22:58:05.581918+0000 mon.a (mon.0) 250 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:58:06.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:06 vm11 bash[23232]: audit 2026-03-08T22:58:05.591076+0000 mon.a (mon.0) 251 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T22:58:06.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:06 vm11 bash[23232]: audit 2026-03-08T22:58:05.591076+0000 mon.a (mon.0) 251 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T22:58:06.558 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 22:58:06 vm11 bash[24047]: debug 2026-03-08T22:58:06.219+0000 7f177095f140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
2026-03-08T22:58:06.995 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 22:58:06 vm11 bash[24047]: debug 2026-03-08T22:58:06.655+0000 7f177095f140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
2026-03-08T22:58:06.996 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 22:58:06 vm11 bash[24047]: debug 2026-03-08T22:58:06.735+0000 7f177095f140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
2026-03-08T22:58:06.996 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 22:58:06 vm11 bash[24047]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
2026-03-08T22:58:06.996 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 22:58:06 vm11 bash[24047]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
2026-03-08T22:58:06.996 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 22:58:06 vm11 bash[24047]: from numpy import show_config as show_numpy_config
2026-03-08T22:58:06.996 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 22:58:06 vm11 bash[24047]: debug 2026-03-08T22:58:06.855+0000 7f177095f140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
2026-03-08T22:58:06.996 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 22:58:06 vm11 bash[24047]: debug 2026-03-08T22:58:06.991+0000 7f177095f140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
2026-03-08T22:58:07.288 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 22:58:07 vm11 bash[24047]: debug 2026-03-08T22:58:07.031+0000 7f177095f140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
2026-03-08T22:58:07.288 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 22:58:07 vm11 bash[24047]: debug 2026-03-08T22:58:07.071+0000 7f177095f140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
2026-03-08T22:58:07.288 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 22:58:07 vm11 bash[24047]: debug 2026-03-08T22:58:07.111+0000 7f177095f140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
2026-03-08T22:58:07.288 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 22:58:07 vm11 bash[24047]: debug 2026-03-08T22:58:07.163+0000 7f177095f140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
2026-03-08T22:58:07.556 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:07 vm11 bash[23232]: cluster 2026-03-08T22:58:05.452741+0000 mgr.y (mgr.14150) 57 : cluster [DBG] pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T22:58:07.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:07 vm11 bash[23232]: cluster 2026-03-08T22:58:05.452741+0000 mgr.y (mgr.14150) 57 : cluster [DBG] pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T22:58:07.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:07 vm06 bash[20625]: cluster 2026-03-08T22:58:05.452741+0000 mgr.y (mgr.14150) 57 : cluster [DBG] pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T22:58:07.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:07 vm06 bash[20625]: cluster 2026-03-08T22:58:05.452741+0000 mgr.y (mgr.14150) 57 : cluster [DBG] pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T22:58:07.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:07 vm06 bash[27746]: cluster 2026-03-08T22:58:05.452741+0000 mgr.y (mgr.14150) 57 : cluster [DBG] pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T22:58:07.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:07 vm06 bash[27746]: cluster 2026-03-08T22:58:05.452741+0000 mgr.y (mgr.14150) 57 : cluster [DBG] pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T22:58:07.852 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 22:58:07 vm11 bash[24047]: debug 2026-03-08T22:58:07.595+0000 7f177095f140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
2026-03-08T22:58:07.852 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 22:58:07 vm11 bash[24047]: debug 2026-03-08T22:58:07.631+0000 7f177095f140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
2026-03-08T22:58:07.852 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 22:58:07 vm11 bash[24047]: debug 2026-03-08T22:58:07.667+0000 7f177095f140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
2026-03-08T22:58:07.852 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 22:58:07 vm11 bash[24047]: debug 2026-03-08T22:58:07.807+0000 7f177095f140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
2026-03-08T22:58:08.135 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 22:58:07 vm11 bash[24047]: debug 2026-03-08T22:58:07.847+0000 7f177095f140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
2026-03-08T22:58:08.135 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 22:58:07 vm11 bash[24047]: debug 2026-03-08T22:58:07.887+0000 7f177095f140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
2026-03-08T22:58:08.135 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 22:58:07 vm11 bash[24047]: debug 2026-03-08T22:58:07.991+0000 7f177095f140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
2026-03-08T22:58:08.510 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:08 vm11 bash[23232]: cluster 2026-03-08T22:58:07.452992+0000 mgr.y (mgr.14150) 58 : cluster [DBG] pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T22:58:08.510 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:08 vm11 bash[23232]: cluster 2026-03-08T22:58:07.452992+0000 mgr.y (mgr.14150) 58 : cluster [DBG] pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T22:58:08.510 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 22:58:08 vm11 bash[24047]: debug 2026-03-08T22:58:08.131+0000 7f177095f140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
2026-03-08T22:58:08.510 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 22:58:08 vm11 bash[24047]: debug 2026-03-08T22:58:08.291+0000 7f177095f140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
2026-03-08T22:58:08.510 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 22:58:08 vm11 bash[24047]: debug 2026-03-08T22:58:08.323+0000 7f177095f140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
2026-03-08T22:58:08.510 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 22:58:08 vm11 bash[24047]: debug 2026-03-08T22:58:08.363+0000 7f177095f140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
2026-03-08T22:58:08.510 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 22:58:08 vm11 bash[24047]: debug 2026-03-08T22:58:08.507+0000 7f177095f140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
2026-03-08T22:58:08.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:08 vm06 bash[20625]: cluster 2026-03-08T22:58:07.452992+0000 mgr.y (mgr.14150) 58 : cluster [DBG] pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T22:58:08.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:08 vm06 bash[20625]: cluster 2026-03-08T22:58:07.452992+0000 mgr.y (mgr.14150) 58 : cluster [DBG] pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T22:58:08.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:08 vm06 bash[27746]: cluster 2026-03-08T22:58:07.452992+0000 mgr.y (mgr.14150) 58 : cluster [DBG] pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T22:58:08.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:08 vm06 bash[27746]: cluster 2026-03-08T22:58:07.452992+0000 mgr.y (mgr.14150) 58 : cluster [DBG] pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T22:58:08.806 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 22:58:08 vm11 bash[24047]: debug 2026-03-08T22:58:08.715+0000 7f177095f140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
2026-03-08T22:58:09.556 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:09 vm11 bash[23232]: cluster 2026-03-08T22:58:08.721420+0000 mon.a (mon.0) 252 : cluster [DBG] Standby manager daemon x started
2026-03-08T22:58:09.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:09 vm11 bash[23232]: cluster 2026-03-08T22:58:08.721420+0000 mon.a (mon.0) 252 : cluster [DBG] Standby manager daemon x started
2026-03-08T22:58:09.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:09 vm11 bash[23232]: audit 2026-03-08T22:58:08.722684+0000 mon.b (mon.1) 3 : audit [DBG] from='mgr.? 192.168.123.111:0/3275978447' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch
2026-03-08T22:58:09.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:09 vm11 bash[23232]: audit 2026-03-08T22:58:08.722684+0000 mon.b (mon.1) 3 : audit [DBG] from='mgr.? 192.168.123.111:0/3275978447' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch
2026-03-08T22:58:09.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:09 vm11 bash[23232]: audit 2026-03-08T22:58:08.723129+0000 mon.b (mon.1) 4 : audit [DBG] from='mgr.? 192.168.123.111:0/3275978447' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch
2026-03-08T22:58:09.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:09 vm11 bash[23232]: audit 2026-03-08T22:58:08.723129+0000 mon.b (mon.1) 4 : audit [DBG] from='mgr.? 192.168.123.111:0/3275978447' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch
2026-03-08T22:58:09.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:09 vm11 bash[23232]: audit 2026-03-08T22:58:08.723907+0000 mon.b (mon.1) 5 : audit [DBG] from='mgr.? 192.168.123.111:0/3275978447' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch
2026-03-08T22:58:09.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:09 vm11 bash[23232]: audit 2026-03-08T22:58:08.723907+0000 mon.b (mon.1) 5 : audit [DBG] from='mgr.? 192.168.123.111:0/3275978447' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch
2026-03-08T22:58:09.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:09 vm11 bash[23232]: audit 2026-03-08T22:58:08.724213+0000 mon.b (mon.1) 6 : audit [DBG] from='mgr.? 192.168.123.111:0/3275978447' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch
2026-03-08T22:58:09.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:09 vm11 bash[23232]: audit 2026-03-08T22:58:08.724213+0000 mon.b (mon.1) 6 : audit [DBG] from='mgr.? 192.168.123.111:0/3275978447' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch
2026-03-08T22:58:09.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:09 vm06 bash[20625]: cluster 2026-03-08T22:58:08.721420+0000 mon.a (mon.0) 252 : cluster [DBG] Standby manager daemon x started
2026-03-08T22:58:09.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:09 vm06 bash[20625]: cluster 2026-03-08T22:58:08.721420+0000 mon.a (mon.0) 252 : cluster [DBG] Standby manager daemon x started
2026-03-08T22:58:09.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:09 vm06 bash[20625]: audit 2026-03-08T22:58:08.722684+0000 mon.b (mon.1) 3 : audit [DBG] from='mgr.? 192.168.123.111:0/3275978447' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch
2026-03-08T22:58:09.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:09 vm06 bash[20625]: audit 2026-03-08T22:58:08.722684+0000 mon.b (mon.1) 3 : audit [DBG] from='mgr.? 192.168.123.111:0/3275978447' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch
2026-03-08T22:58:09.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:09 vm06 bash[20625]: audit 2026-03-08T22:58:08.723129+0000 mon.b (mon.1) 4 : audit [DBG] from='mgr.? 192.168.123.111:0/3275978447' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch
2026-03-08T22:58:09.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:09 vm06 bash[20625]: audit 2026-03-08T22:58:08.723129+0000 mon.b (mon.1) 4 : audit [DBG] from='mgr.? 192.168.123.111:0/3275978447' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch
2026-03-08T22:58:09.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:09 vm06 bash[20625]: audit 2026-03-08T22:58:08.723907+0000 mon.b (mon.1) 5 : audit [DBG] from='mgr.? 192.168.123.111:0/3275978447' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch
2026-03-08T22:58:09.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:09 vm06 bash[20625]: audit 2026-03-08T22:58:08.723907+0000 mon.b (mon.1) 5 : audit [DBG] from='mgr.? 192.168.123.111:0/3275978447' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch
2026-03-08T22:58:09.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:09 vm06 bash[20625]: audit 2026-03-08T22:58:08.724213+0000 mon.b (mon.1) 6 : audit [DBG] from='mgr.? 192.168.123.111:0/3275978447' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch
2026-03-08T22:58:09.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:09 vm06 bash[20625]: audit 2026-03-08T22:58:08.724213+0000 mon.b (mon.1) 6 : audit [DBG] from='mgr.? 192.168.123.111:0/3275978447' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch
2026-03-08T22:58:09.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:09 vm06 bash[27746]: cluster 2026-03-08T22:58:08.721420+0000 mon.a (mon.0) 252 : cluster [DBG] Standby manager daemon x started
2026-03-08T22:58:09.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:09 vm06 bash[27746]: cluster 2026-03-08T22:58:08.721420+0000 mon.a (mon.0) 252 : cluster [DBG] Standby manager daemon x started
2026-03-08T22:58:09.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:09 vm06 bash[27746]: audit 2026-03-08T22:58:08.722684+0000 mon.b (mon.1) 3 : audit [DBG] from='mgr.? 192.168.123.111:0/3275978447' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch
2026-03-08T22:58:09.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:09 vm06 bash[27746]: audit 2026-03-08T22:58:08.722684+0000 mon.b (mon.1) 3 : audit [DBG] from='mgr.?
192.168.123.111:0/3275978447' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-08T22:58:09.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:09 vm06 bash[27746]: audit 2026-03-08T22:58:08.723129+0000 mon.b (mon.1) 4 : audit [DBG] from='mgr.? 192.168.123.111:0/3275978447' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-08T22:58:09.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:09 vm06 bash[27746]: audit 2026-03-08T22:58:08.723129+0000 mon.b (mon.1) 4 : audit [DBG] from='mgr.? 192.168.123.111:0/3275978447' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-08T22:58:09.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:09 vm06 bash[27746]: audit 2026-03-08T22:58:08.723907+0000 mon.b (mon.1) 5 : audit [DBG] from='mgr.? 192.168.123.111:0/3275978447' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-08T22:58:09.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:09 vm06 bash[27746]: audit 2026-03-08T22:58:08.723907+0000 mon.b (mon.1) 5 : audit [DBG] from='mgr.? 192.168.123.111:0/3275978447' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-08T22:58:09.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:09 vm06 bash[27746]: audit 2026-03-08T22:58:08.724213+0000 mon.b (mon.1) 6 : audit [DBG] from='mgr.? 192.168.123.111:0/3275978447' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-08T22:58:09.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:09 vm06 bash[27746]: audit 2026-03-08T22:58:08.724213+0000 mon.b (mon.1) 6 : audit [DBG] from='mgr.? 
192.168.123.111:0/3275978447' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-08T22:58:10.701 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/mon.c/config 2026-03-08T22:58:10.725 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:10 vm06 bash[20625]: cluster 2026-03-08T22:58:09.315604+0000 mon.a (mon.0) 253 : cluster [DBG] mgrmap e14: y(active, since 57s), standbys: x 2026-03-08T22:58:10.725 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:10 vm06 bash[20625]: cluster 2026-03-08T22:58:09.315604+0000 mon.a (mon.0) 253 : cluster [DBG] mgrmap e14: y(active, since 57s), standbys: x 2026-03-08T22:58:10.725 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:10 vm06 bash[20625]: audit 2026-03-08T22:58:09.315692+0000 mon.a (mon.0) 254 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-08T22:58:10.725 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:10 vm06 bash[20625]: audit 2026-03-08T22:58:09.315692+0000 mon.a (mon.0) 254 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-08T22:58:10.725 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:10 vm06 bash[20625]: cluster 2026-03-08T22:58:09.453208+0000 mgr.y (mgr.14150) 59 : cluster [DBG] pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T22:58:10.725 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:10 vm06 bash[20625]: cluster 2026-03-08T22:58:09.453208+0000 mgr.y (mgr.14150) 59 : cluster [DBG] pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T22:58:10.725 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:10 vm06 bash[20625]: audit 2026-03-08T22:58:10.064937+0000 mon.a (mon.0) 255 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:58:10.725 
INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:10 vm06 bash[20625]: audit 2026-03-08T22:58:10.064937+0000 mon.a (mon.0) 255 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:58:10.725 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:10 vm06 bash[27746]: cluster 2026-03-08T22:58:09.315604+0000 mon.a (mon.0) 253 : cluster [DBG] mgrmap e14: y(active, since 57s), standbys: x 2026-03-08T22:58:10.725 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:10 vm06 bash[27746]: cluster 2026-03-08T22:58:09.315604+0000 mon.a (mon.0) 253 : cluster [DBG] mgrmap e14: y(active, since 57s), standbys: x 2026-03-08T22:58:10.725 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:10 vm06 bash[27746]: audit 2026-03-08T22:58:09.315692+0000 mon.a (mon.0) 254 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-08T22:58:10.725 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:10 vm06 bash[27746]: audit 2026-03-08T22:58:09.315692+0000 mon.a (mon.0) 254 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-08T22:58:10.725 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:10 vm06 bash[27746]: cluster 2026-03-08T22:58:09.453208+0000 mgr.y (mgr.14150) 59 : cluster [DBG] pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T22:58:10.725 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:10 vm06 bash[27746]: cluster 2026-03-08T22:58:09.453208+0000 mgr.y (mgr.14150) 59 : cluster [DBG] pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T22:58:10.725 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:10 vm06 bash[27746]: audit 2026-03-08T22:58:10.064937+0000 mon.a (mon.0) 255 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:58:10.725 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:10 vm06 
bash[27746]: audit 2026-03-08T22:58:10.064937+0000 mon.a (mon.0) 255 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:58:10.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:10 vm11 bash[23232]: cluster 2026-03-08T22:58:09.315604+0000 mon.a (mon.0) 253 : cluster [DBG] mgrmap e14: y(active, since 57s), standbys: x 2026-03-08T22:58:10.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:10 vm11 bash[23232]: cluster 2026-03-08T22:58:09.315604+0000 mon.a (mon.0) 253 : cluster [DBG] mgrmap e14: y(active, since 57s), standbys: x 2026-03-08T22:58:10.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:10 vm11 bash[23232]: audit 2026-03-08T22:58:09.315692+0000 mon.a (mon.0) 254 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-08T22:58:10.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:10 vm11 bash[23232]: audit 2026-03-08T22:58:09.315692+0000 mon.a (mon.0) 254 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-08T22:58:10.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:10 vm11 bash[23232]: cluster 2026-03-08T22:58:09.453208+0000 mgr.y (mgr.14150) 59 : cluster [DBG] pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T22:58:10.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:10 vm11 bash[23232]: cluster 2026-03-08T22:58:09.453208+0000 mgr.y (mgr.14150) 59 : cluster [DBG] pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T22:58:10.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:10 vm11 bash[23232]: audit 2026-03-08T22:58:10.064937+0000 mon.a (mon.0) 255 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:58:10.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:10 vm11 bash[23232]: audit 2026-03-08T22:58:10.064937+0000 mon.a 
(mon.0) 255 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:58:11.545 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-08T22:58:11.562 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e2eb96e6-1b41-11f1-83e5-75f1b5373d30 -- ceph orch daemon add osd vm06:/dev/vde 2026-03-08T22:58:11.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:11 vm06 bash[20625]: audit 2026-03-08T22:58:10.549757+0000 mon.a (mon.0) 256 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:58:11.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:11 vm06 bash[20625]: audit 2026-03-08T22:58:10.549757+0000 mon.a (mon.0) 256 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:58:11.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:11 vm06 bash[20625]: audit 2026-03-08T22:58:10.556656+0000 mon.a (mon.0) 257 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:58:11.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:11 vm06 bash[20625]: audit 2026-03-08T22:58:10.556656+0000 mon.a (mon.0) 257 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:58:11.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:11 vm06 bash[20625]: audit 2026-03-08T22:58:10.557564+0000 mon.a (mon.0) 258 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T22:58:11.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:11 vm06 bash[20625]: audit 2026-03-08T22:58:10.557564+0000 mon.a (mon.0) 258 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T22:58:11.780 
INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:11 vm06 bash[20625]: audit 2026-03-08T22:58:10.557966+0000 mon.a (mon.0) 259 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T22:58:11.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:11 vm06 bash[20625]: audit 2026-03-08T22:58:10.557966+0000 mon.a (mon.0) 259 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T22:58:11.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:11 vm06 bash[20625]: audit 2026-03-08T22:58:10.561808+0000 mon.a (mon.0) 260 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:58:11.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:11 vm06 bash[20625]: audit 2026-03-08T22:58:10.561808+0000 mon.a (mon.0) 260 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:58:11.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:11 vm06 bash[20625]: cephadm 2026-03-08T22:58:10.572391+0000 mgr.y (mgr.14150) 60 : cephadm [INF] Reconfiguring mgr.y (unknown last config time)... 2026-03-08T22:58:11.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:11 vm06 bash[20625]: cephadm 2026-03-08T22:58:10.572391+0000 mgr.y (mgr.14150) 60 : cephadm [INF] Reconfiguring mgr.y (unknown last config time)... 
2026-03-08T22:58:11.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:11 vm06 bash[20625]: audit 2026-03-08T22:58:10.572681+0000 mon.a (mon.0) 261 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-08T22:58:11.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:11 vm06 bash[20625]: audit 2026-03-08T22:58:10.572681+0000 mon.a (mon.0) 261 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-08T22:58:11.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:11 vm06 bash[20625]: audit 2026-03-08T22:58:10.573297+0000 mon.a (mon.0) 262 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-08T22:58:11.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:11 vm06 bash[20625]: audit 2026-03-08T22:58:10.573297+0000 mon.a (mon.0) 262 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-08T22:58:11.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:11 vm06 bash[20625]: audit 2026-03-08T22:58:10.573746+0000 mon.a (mon.0) 263 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T22:58:11.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:11 vm06 bash[20625]: audit 2026-03-08T22:58:10.573746+0000 mon.a (mon.0) 263 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T22:58:11.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:11 vm06 bash[20625]: cephadm 2026-03-08T22:58:10.574307+0000 mgr.y (mgr.14150) 
61 : cephadm [INF] Reconfiguring daemon mgr.y on vm06 2026-03-08T22:58:11.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:11 vm06 bash[20625]: cephadm 2026-03-08T22:58:10.574307+0000 mgr.y (mgr.14150) 61 : cephadm [INF] Reconfiguring daemon mgr.y on vm06 2026-03-08T22:58:11.781 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:11 vm06 bash[27746]: audit 2026-03-08T22:58:10.549757+0000 mon.a (mon.0) 256 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:58:11.781 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:11 vm06 bash[27746]: audit 2026-03-08T22:58:10.549757+0000 mon.a (mon.0) 256 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:58:11.781 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:11 vm06 bash[27746]: audit 2026-03-08T22:58:10.556656+0000 mon.a (mon.0) 257 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:58:11.781 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:11 vm06 bash[27746]: audit 2026-03-08T22:58:10.556656+0000 mon.a (mon.0) 257 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:58:11.781 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:11 vm06 bash[27746]: audit 2026-03-08T22:58:10.557564+0000 mon.a (mon.0) 258 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T22:58:11.781 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:11 vm06 bash[27746]: audit 2026-03-08T22:58:10.557564+0000 mon.a (mon.0) 258 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T22:58:11.781 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:11 vm06 bash[27746]: audit 2026-03-08T22:58:10.557966+0000 mon.a (mon.0) 259 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth 
get", "entity": "client.admin"}]: dispatch 2026-03-08T22:58:11.781 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:11 vm06 bash[27746]: audit 2026-03-08T22:58:10.557966+0000 mon.a (mon.0) 259 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T22:58:11.781 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:11 vm06 bash[27746]: audit 2026-03-08T22:58:10.561808+0000 mon.a (mon.0) 260 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:58:11.781 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:11 vm06 bash[27746]: audit 2026-03-08T22:58:10.561808+0000 mon.a (mon.0) 260 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:58:11.781 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:11 vm06 bash[27746]: cephadm 2026-03-08T22:58:10.572391+0000 mgr.y (mgr.14150) 60 : cephadm [INF] Reconfiguring mgr.y (unknown last config time)... 2026-03-08T22:58:11.781 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:11 vm06 bash[27746]: cephadm 2026-03-08T22:58:10.572391+0000 mgr.y (mgr.14150) 60 : cephadm [INF] Reconfiguring mgr.y (unknown last config time)... 
2026-03-08T22:58:11.781 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:11 vm06 bash[27746]: audit 2026-03-08T22:58:10.572681+0000 mon.a (mon.0) 261 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-08T22:58:11.781 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:11 vm06 bash[27746]: audit 2026-03-08T22:58:10.572681+0000 mon.a (mon.0) 261 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-08T22:58:11.781 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:11 vm06 bash[27746]: audit 2026-03-08T22:58:10.573297+0000 mon.a (mon.0) 262 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-08T22:58:11.781 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:11 vm06 bash[27746]: audit 2026-03-08T22:58:10.573297+0000 mon.a (mon.0) 262 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-08T22:58:11.781 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:11 vm06 bash[27746]: audit 2026-03-08T22:58:10.573746+0000 mon.a (mon.0) 263 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T22:58:11.781 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:11 vm06 bash[27746]: audit 2026-03-08T22:58:10.573746+0000 mon.a (mon.0) 263 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T22:58:11.781 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:11 vm06 bash[27746]: cephadm 2026-03-08T22:58:10.574307+0000 mgr.y (mgr.14150) 
61 : cephadm [INF] Reconfiguring daemon mgr.y on vm06 2026-03-08T22:58:11.781 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:11 vm06 bash[27746]: cephadm 2026-03-08T22:58:10.574307+0000 mgr.y (mgr.14150) 61 : cephadm [INF] Reconfiguring daemon mgr.y on vm06 2026-03-08T22:58:12.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:11 vm11 bash[23232]: audit 2026-03-08T22:58:10.549757+0000 mon.a (mon.0) 256 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:58:12.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:11 vm11 bash[23232]: audit 2026-03-08T22:58:10.549757+0000 mon.a (mon.0) 256 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:58:12.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:11 vm11 bash[23232]: audit 2026-03-08T22:58:10.556656+0000 mon.a (mon.0) 257 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:58:12.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:11 vm11 bash[23232]: audit 2026-03-08T22:58:10.556656+0000 mon.a (mon.0) 257 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:58:12.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:11 vm11 bash[23232]: audit 2026-03-08T22:58:10.557564+0000 mon.a (mon.0) 258 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T22:58:12.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:11 vm11 bash[23232]: audit 2026-03-08T22:58:10.557564+0000 mon.a (mon.0) 258 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T22:58:12.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:11 vm11 bash[23232]: audit 2026-03-08T22:58:10.557966+0000 mon.a (mon.0) 259 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth 
get", "entity": "client.admin"}]: dispatch 2026-03-08T22:58:12.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:11 vm11 bash[23232]: audit 2026-03-08T22:58:10.557966+0000 mon.a (mon.0) 259 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T22:58:12.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:11 vm11 bash[23232]: audit 2026-03-08T22:58:10.561808+0000 mon.a (mon.0) 260 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:58:12.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:11 vm11 bash[23232]: audit 2026-03-08T22:58:10.561808+0000 mon.a (mon.0) 260 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:58:12.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:11 vm11 bash[23232]: cephadm 2026-03-08T22:58:10.572391+0000 mgr.y (mgr.14150) 60 : cephadm [INF] Reconfiguring mgr.y (unknown last config time)... 2026-03-08T22:58:12.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:11 vm11 bash[23232]: cephadm 2026-03-08T22:58:10.572391+0000 mgr.y (mgr.14150) 60 : cephadm [INF] Reconfiguring mgr.y (unknown last config time)... 
2026-03-08T22:58:12.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:11 vm11 bash[23232]: audit 2026-03-08T22:58:10.572681+0000 mon.a (mon.0) 261 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-08T22:58:12.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:11 vm11 bash[23232]: audit 2026-03-08T22:58:10.572681+0000 mon.a (mon.0) 261 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-08T22:58:12.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:11 vm11 bash[23232]: audit 2026-03-08T22:58:10.573297+0000 mon.a (mon.0) 262 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-08T22:58:12.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:11 vm11 bash[23232]: audit 2026-03-08T22:58:10.573297+0000 mon.a (mon.0) 262 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-08T22:58:12.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:11 vm11 bash[23232]: audit 2026-03-08T22:58:10.573746+0000 mon.a (mon.0) 263 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T22:58:12.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:11 vm11 bash[23232]: audit 2026-03-08T22:58:10.573746+0000 mon.a (mon.0) 263 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T22:58:12.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:11 vm11 bash[23232]: cephadm 2026-03-08T22:58:10.574307+0000 mgr.y (mgr.14150) 
61 : cephadm [INF] Reconfiguring daemon mgr.y on vm06 2026-03-08T22:58:12.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:11 vm11 bash[23232]: cephadm 2026-03-08T22:58:10.574307+0000 mgr.y (mgr.14150) 61 : cephadm [INF] Reconfiguring daemon mgr.y on vm06 2026-03-08T22:58:13.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:12 vm06 bash[20625]: cluster 2026-03-08T22:58:11.453399+0000 mgr.y (mgr.14150) 62 : cluster [DBG] pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T22:58:13.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:12 vm06 bash[20625]: cluster 2026-03-08T22:58:11.453399+0000 mgr.y (mgr.14150) 62 : cluster [DBG] pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T22:58:13.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:12 vm06 bash[20625]: audit 2026-03-08T22:58:11.738185+0000 mon.a (mon.0) 264 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:58:13.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:12 vm06 bash[20625]: audit 2026-03-08T22:58:11.738185+0000 mon.a (mon.0) 264 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:58:13.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:12 vm06 bash[20625]: audit 2026-03-08T22:58:11.742391+0000 mon.a (mon.0) 265 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:58:13.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:12 vm06 bash[20625]: audit 2026-03-08T22:58:11.742391+0000 mon.a (mon.0) 265 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:58:13.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:12 vm06 bash[20625]: audit 2026-03-08T22:58:11.743585+0000 mon.a (mon.0) 266 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T22:58:13.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 
22:58:12 vm06 bash[20625]: audit 2026-03-08T22:58:11.743585+0000 mon.a (mon.0) 266 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T22:58:13.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:12 vm06 bash[20625]: audit 2026-03-08T22:58:11.745231+0000 mon.a (mon.0) 267 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T22:58:13.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:12 vm06 bash[20625]: audit 2026-03-08T22:58:11.745231+0000 mon.a (mon.0) 267 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T22:58:13.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:12 vm06 bash[20625]: audit 2026-03-08T22:58:11.745641+0000 mon.a (mon.0) 268 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T22:58:13.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:12 vm06 bash[20625]: audit 2026-03-08T22:58:11.745641+0000 mon.a (mon.0) 268 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T22:58:13.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:12 vm06 bash[20625]: audit 2026-03-08T22:58:11.749221+0000 mon.a (mon.0) 269 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:58:13.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:12 vm06 bash[20625]: audit 2026-03-08T22:58:11.749221+0000 mon.a (mon.0) 269 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:58:13.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:12 vm06 bash[27746]: cluster 2026-03-08T22:58:11.453399+0000 mgr.y (mgr.14150) 62 : cluster [DBG] pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T22:58:13.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:12 vm06 bash[27746]: cluster 2026-03-08T22:58:11.453399+0000 mgr.y (mgr.14150) 62 : cluster [DBG] pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T22:58:13.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:12 vm06 bash[27746]: audit 2026-03-08T22:58:11.738185+0000 mon.a (mon.0) 264 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:58:13.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:12 vm06 bash[27746]: audit 2026-03-08T22:58:11.738185+0000 mon.a (mon.0) 264 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:58:13.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:12 vm06 bash[27746]: audit 2026-03-08T22:58:11.742391+0000 mon.a (mon.0) 265 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:58:13.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:12 vm06 bash[27746]: audit 2026-03-08T22:58:11.742391+0000 mon.a (mon.0) 265 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:58:13.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:12 vm06 bash[27746]: audit 2026-03-08T22:58:11.743585+0000 mon.a (mon.0) 266 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T22:58:13.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:12 vm06 bash[27746]: audit 2026-03-08T22:58:11.743585+0000 mon.a (mon.0) 266 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T22:58:13.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:12 vm06 bash[27746]: audit 2026-03-08T22:58:11.745231+0000 mon.a (mon.0) 267 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T22:58:13.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:12 vm06 bash[27746]: audit 2026-03-08T22:58:11.745231+0000 mon.a (mon.0) 267 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T22:58:13.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:12 vm06 bash[27746]: audit 2026-03-08T22:58:11.745641+0000 mon.a (mon.0) 268 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T22:58:13.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:12 vm06 bash[27746]: audit 2026-03-08T22:58:11.745641+0000 mon.a (mon.0) 268 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T22:58:13.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:12 vm06 bash[27746]: audit 2026-03-08T22:58:11.749221+0000 mon.a (mon.0) 269 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:58:13.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:12 vm06 bash[27746]: audit 2026-03-08T22:58:11.749221+0000 mon.a (mon.0) 269 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:58:13.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:12 vm11 bash[23232]: cluster 2026-03-08T22:58:11.453399+0000 mgr.y (mgr.14150) 62 : cluster [DBG] pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T22:58:13.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:12 vm11 bash[23232]: cluster 2026-03-08T22:58:11.453399+0000 mgr.y (mgr.14150) 62 : cluster [DBG] pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T22:58:13.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:12 vm11 bash[23232]: audit 2026-03-08T22:58:11.738185+0000 mon.a (mon.0) 264 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:58:13.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:12 vm11 bash[23232]: audit 2026-03-08T22:58:11.738185+0000 mon.a (mon.0) 264 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:58:13.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:12 vm11 bash[23232]: audit 2026-03-08T22:58:11.742391+0000 mon.a (mon.0) 265 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:58:13.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:12 vm11 bash[23232]: audit 2026-03-08T22:58:11.742391+0000 mon.a (mon.0) 265 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:58:13.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:12 vm11 bash[23232]: audit 2026-03-08T22:58:11.743585+0000 mon.a (mon.0) 266 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T22:58:13.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:12 vm11 bash[23232]: audit 2026-03-08T22:58:11.743585+0000 mon.a (mon.0) 266 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T22:58:13.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:12 vm11 bash[23232]: audit 2026-03-08T22:58:11.745231+0000 mon.a (mon.0) 267 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T22:58:13.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:12 vm11 bash[23232]: audit 2026-03-08T22:58:11.745231+0000 mon.a (mon.0) 267 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T22:58:13.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:12 vm11 bash[23232]: audit 2026-03-08T22:58:11.745641+0000 mon.a (mon.0) 268 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T22:58:13.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:12 vm11 bash[23232]: audit 2026-03-08T22:58:11.745641+0000 mon.a (mon.0) 268 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T22:58:13.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:12 vm11 bash[23232]: audit 2026-03-08T22:58:11.749221+0000 mon.a (mon.0) 269 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:58:13.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:12 vm11 bash[23232]: audit 2026-03-08T22:58:11.749221+0000 mon.a (mon.0) 269 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:58:15.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:14 vm06 bash[20625]: cluster 2026-03-08T22:58:13.453669+0000 mgr.y (mgr.14150) 63 : cluster [DBG] pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T22:58:15.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:14 vm06 bash[20625]: cluster 2026-03-08T22:58:13.453669+0000 mgr.y (mgr.14150) 63 : cluster [DBG] pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T22:58:15.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:14 vm06 bash[27746]: cluster 2026-03-08T22:58:13.453669+0000 mgr.y (mgr.14150) 63 : cluster [DBG] pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T22:58:15.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:14 vm06 bash[27746]: cluster 2026-03-08T22:58:13.453669+0000 mgr.y (mgr.14150) 63 : cluster [DBG] pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T22:58:15.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:14 vm11 bash[23232]: cluster 2026-03-08T22:58:13.453669+0000 mgr.y (mgr.14150) 63 : cluster [DBG] pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T22:58:15.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:14 vm11 bash[23232]: cluster 2026-03-08T22:58:13.453669+0000 mgr.y (mgr.14150) 63 : cluster [DBG] pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T22:58:16.219 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/mon.c/config
2026-03-08T22:58:17.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:16 vm11 bash[23232]: cluster 2026-03-08T22:58:15.453959+0000 mgr.y (mgr.14150) 64 : cluster [DBG] pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T22:58:17.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:16 vm11 bash[23232]: cluster 2026-03-08T22:58:15.453959+0000 mgr.y (mgr.14150) 64 : cluster [DBG] pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T22:58:17.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:16 vm11 bash[23232]: audit 2026-03-08T22:58:16.471626+0000 mon.a (mon.0) 270 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-08T22:58:17.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:16 vm11 bash[23232]: audit 2026-03-08T22:58:16.471626+0000 mon.a (mon.0) 270 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-08T22:58:17.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:16 vm11 bash[23232]: audit 2026-03-08T22:58:16.472884+0000 mon.a (mon.0) 271 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-08T22:58:17.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:16 vm11 bash[23232]: audit 2026-03-08T22:58:16.472884+0000 mon.a (mon.0) 271 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-08T22:58:17.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:16 vm11 bash[23232]: audit 2026-03-08T22:58:16.473317+0000 mon.a (mon.0) 272 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T22:58:17.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:16 vm11 bash[23232]: audit 2026-03-08T22:58:16.473317+0000 mon.a (mon.0) 272 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T22:58:17.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:16 vm06 bash[20625]: cluster 2026-03-08T22:58:15.453959+0000 mgr.y (mgr.14150) 64 : cluster [DBG] pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T22:58:17.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:16 vm06 bash[20625]: cluster 2026-03-08T22:58:15.453959+0000 mgr.y (mgr.14150) 64 : cluster [DBG] pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T22:58:17.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:16 vm06 bash[20625]: audit 2026-03-08T22:58:16.471626+0000 mon.a (mon.0) 270 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-08T22:58:17.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:16 vm06 bash[20625]: audit 2026-03-08T22:58:16.471626+0000 mon.a (mon.0) 270 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-08T22:58:17.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:16 vm06 bash[20625]: audit 2026-03-08T22:58:16.472884+0000 mon.a (mon.0) 271 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-08T22:58:17.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:16 vm06 bash[20625]: audit 2026-03-08T22:58:16.472884+0000 mon.a (mon.0) 271 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-08T22:58:17.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:16 vm06 bash[20625]: audit 2026-03-08T22:58:16.473317+0000 mon.a (mon.0) 272 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T22:58:17.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:16 vm06 bash[20625]: audit 2026-03-08T22:58:16.473317+0000 mon.a (mon.0) 272 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T22:58:17.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:16 vm06 bash[27746]: cluster 2026-03-08T22:58:15.453959+0000 mgr.y (mgr.14150) 64 : cluster [DBG] pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T22:58:17.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:16 vm06 bash[27746]: cluster 2026-03-08T22:58:15.453959+0000 mgr.y (mgr.14150) 64 : cluster [DBG] pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T22:58:17.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:16 vm06 bash[27746]: audit 2026-03-08T22:58:16.471626+0000 mon.a (mon.0) 270 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-08T22:58:17.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:16 vm06 bash[27746]: audit 2026-03-08T22:58:16.471626+0000 mon.a (mon.0) 270 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-08T22:58:17.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:16 vm06 bash[27746]: audit 2026-03-08T22:58:16.472884+0000 mon.a (mon.0) 271 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-08T22:58:17.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:16 vm06 bash[27746]: audit 2026-03-08T22:58:16.472884+0000 mon.a (mon.0) 271 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-08T22:58:17.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:16 vm06 bash[27746]: audit 2026-03-08T22:58:16.473317+0000 mon.a (mon.0) 272 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T22:58:17.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:16 vm06 bash[27746]: audit 2026-03-08T22:58:16.473317+0000 mon.a (mon.0) 272 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T22:58:18.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:17 vm11 bash[23232]: audit 2026-03-08T22:58:16.470218+0000 mgr.y (mgr.14150) 65 : audit [DBG] from='client.24119 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm06:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch
2026-03-08T22:58:18.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:17 vm11 bash[23232]: audit 2026-03-08T22:58:16.470218+0000 mgr.y (mgr.14150) 65 : audit [DBG] from='client.24119 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm06:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch
2026-03-08T22:58:18.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:17 vm06 bash[20625]: audit 2026-03-08T22:58:16.470218+0000 mgr.y (mgr.14150) 65 : audit [DBG] from='client.24119 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm06:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch
2026-03-08T22:58:18.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:17 vm06 bash[20625]: audit 2026-03-08T22:58:16.470218+0000 mgr.y (mgr.14150) 65 : audit [DBG] from='client.24119 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm06:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch
2026-03-08T22:58:18.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:17 vm06 bash[27746]: audit 2026-03-08T22:58:16.470218+0000 mgr.y (mgr.14150) 65 : audit [DBG] from='client.24119 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm06:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch
2026-03-08T22:58:18.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:17 vm06 bash[27746]: audit 2026-03-08T22:58:16.470218+0000 mgr.y (mgr.14150) 65 : audit [DBG] from='client.24119 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm06:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch
2026-03-08T22:58:19.056 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:18 vm11 bash[23232]: cluster 2026-03-08T22:58:17.454156+0000 mgr.y (mgr.14150) 66 : cluster [DBG] pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T22:58:19.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:18 vm11 bash[23232]: cluster 2026-03-08T22:58:17.454156+0000 mgr.y (mgr.14150) 66 : cluster [DBG] pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T22:58:19.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:18 vm06 bash[20625]: cluster 2026-03-08T22:58:17.454156+0000 mgr.y (mgr.14150) 66 : cluster [DBG] pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T22:58:19.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:18 vm06 bash[20625]: cluster 2026-03-08T22:58:17.454156+0000 mgr.y (mgr.14150) 66 : cluster [DBG] pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T22:58:19.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:18 vm06 bash[27746]: cluster 2026-03-08T22:58:17.454156+0000 mgr.y (mgr.14150) 66 : cluster [DBG] pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T22:58:19.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:18 vm06 bash[27746]: cluster 2026-03-08T22:58:17.454156+0000 mgr.y (mgr.14150) 66 : cluster [DBG] pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T22:58:21.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:20 vm11 bash[23232]: cluster 2026-03-08T22:58:19.454391+0000 mgr.y (mgr.14150) 67 : cluster [DBG] pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T22:58:21.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:20 vm11 bash[23232]: cluster 2026-03-08T22:58:19.454391+0000 mgr.y (mgr.14150) 67 : cluster [DBG] pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T22:58:21.261 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:20 vm06 bash[27746]: cluster 2026-03-08T22:58:19.454391+0000 mgr.y (mgr.14150) 67 : cluster [DBG] pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T22:58:21.261 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:20 vm06 bash[27746]: cluster 2026-03-08T22:58:19.454391+0000 mgr.y (mgr.14150) 67 : cluster [DBG] pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T22:58:21.261 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:20 vm06 bash[20625]: cluster 2026-03-08T22:58:19.454391+0000 mgr.y (mgr.14150) 67 : cluster [DBG] pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T22:58:21.261 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:20 vm06 bash[20625]: cluster 2026-03-08T22:58:19.454391+0000 mgr.y (mgr.14150) 67 : cluster [DBG] pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T22:58:23.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:22 vm11 bash[23232]: cluster 2026-03-08T22:58:21.454649+0000 mgr.y (mgr.14150) 68 : cluster [DBG] pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T22:58:23.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:22 vm11 bash[23232]: cluster 2026-03-08T22:58:21.454649+0000 mgr.y (mgr.14150) 68 : cluster [DBG] pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T22:58:23.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:22 vm11 bash[23232]: audit 2026-03-08T22:58:21.840846+0000 mon.c (mon.2) 2 : audit [INF] from='client.? 192.168.123.106:0/2829402309' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "f584135b-773d-4be0-b5f4-b849576faa2e"}]: dispatch
2026-03-08T22:58:23.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:22 vm11 bash[23232]: audit 2026-03-08T22:58:21.840846+0000 mon.c (mon.2) 2 : audit [INF] from='client.? 192.168.123.106:0/2829402309' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "f584135b-773d-4be0-b5f4-b849576faa2e"}]: dispatch
2026-03-08T22:58:23.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:22 vm11 bash[23232]: audit 2026-03-08T22:58:21.841307+0000 mon.a (mon.0) 273 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "f584135b-773d-4be0-b5f4-b849576faa2e"}]: dispatch
2026-03-08T22:58:23.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:22 vm11 bash[23232]: audit 2026-03-08T22:58:21.841307+0000 mon.a (mon.0) 273 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "f584135b-773d-4be0-b5f4-b849576faa2e"}]: dispatch
2026-03-08T22:58:23.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:22 vm11 bash[23232]: audit 2026-03-08T22:58:21.844093+0000 mon.a (mon.0) 274 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "f584135b-773d-4be0-b5f4-b849576faa2e"}]': finished
2026-03-08T22:58:23.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:22 vm11 bash[23232]: audit 2026-03-08T22:58:21.844093+0000 mon.a (mon.0) 274 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "f584135b-773d-4be0-b5f4-b849576faa2e"}]': finished
2026-03-08T22:58:23.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:22 vm11 bash[23232]: cluster 2026-03-08T22:58:21.846537+0000 mon.a (mon.0) 275 : cluster [DBG] osdmap e5: 1 total, 0 up, 1 in
2026-03-08T22:58:23.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:22 vm11 bash[23232]: cluster 2026-03-08T22:58:21.846537+0000 mon.a (mon.0) 275 : cluster [DBG] osdmap e5: 1 total, 0 up, 1 in
2026-03-08T22:58:23.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:22 vm11 bash[23232]: audit 2026-03-08T22:58:21.846677+0000 mon.a (mon.0) 276 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-08T22:58:23.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:22 vm11 bash[23232]: audit 2026-03-08T22:58:21.846677+0000 mon.a (mon.0) 276 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-08T22:58:23.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:22 vm11 bash[23232]: audit 2026-03-08T22:58:22.460490+0000 mon.a (mon.0) 277 : audit [DBG] from='client.? 192.168.123.106:0/228488653' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-08T22:58:23.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:22 vm11 bash[23232]: audit 2026-03-08T22:58:22.460490+0000 mon.a (mon.0) 277 : audit [DBG] from='client.? 192.168.123.106:0/228488653' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-08T22:58:23.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:22 vm06 bash[20625]: cluster 2026-03-08T22:58:21.454649+0000 mgr.y (mgr.14150) 68 : cluster [DBG] pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T22:58:23.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:22 vm06 bash[20625]: cluster 2026-03-08T22:58:21.454649+0000 mgr.y (mgr.14150) 68 : cluster [DBG] pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T22:58:23.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:22 vm06 bash[20625]: audit 2026-03-08T22:58:21.840846+0000 mon.c (mon.2) 2 : audit [INF] from='client.? 192.168.123.106:0/2829402309' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "f584135b-773d-4be0-b5f4-b849576faa2e"}]: dispatch
2026-03-08T22:58:23.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:22 vm06 bash[20625]: audit 2026-03-08T22:58:21.840846+0000 mon.c (mon.2) 2 : audit [INF] from='client.? 192.168.123.106:0/2829402309' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "f584135b-773d-4be0-b5f4-b849576faa2e"}]: dispatch
2026-03-08T22:58:23.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:22 vm06 bash[20625]: audit 2026-03-08T22:58:21.841307+0000 mon.a (mon.0) 273 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "f584135b-773d-4be0-b5f4-b849576faa2e"}]: dispatch
2026-03-08T22:58:23.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:22 vm06 bash[20625]: audit 2026-03-08T22:58:21.841307+0000 mon.a (mon.0) 273 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "f584135b-773d-4be0-b5f4-b849576faa2e"}]: dispatch
2026-03-08T22:58:23.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:22 vm06 bash[20625]: audit 2026-03-08T22:58:21.844093+0000 mon.a (mon.0) 274 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "f584135b-773d-4be0-b5f4-b849576faa2e"}]': finished
2026-03-08T22:58:23.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:22 vm06 bash[20625]: audit 2026-03-08T22:58:21.844093+0000 mon.a (mon.0) 274 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "f584135b-773d-4be0-b5f4-b849576faa2e"}]': finished
2026-03-08T22:58:23.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:22 vm06 bash[20625]: cluster 2026-03-08T22:58:21.846537+0000 mon.a (mon.0) 275 : cluster [DBG] osdmap e5: 1 total, 0 up, 1 in
2026-03-08T22:58:23.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:22 vm06 bash[20625]: cluster 2026-03-08T22:58:21.846537+0000 mon.a (mon.0) 275 : cluster [DBG] osdmap e5: 1 total, 0 up, 1 in
2026-03-08T22:58:23.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:22 vm06 bash[20625]: audit 2026-03-08T22:58:21.846677+0000 mon.a (mon.0) 276 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-08T22:58:23.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:22 vm06 bash[20625]: audit 2026-03-08T22:58:21.846677+0000 mon.a (mon.0) 276 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-08T22:58:23.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:22 vm06 bash[20625]: audit 2026-03-08T22:58:22.460490+0000 mon.a (mon.0) 277 : audit [DBG] from='client.? 192.168.123.106:0/228488653' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-08T22:58:23.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:22 vm06 bash[20625]: audit 2026-03-08T22:58:22.460490+0000 mon.a (mon.0) 277 : audit [DBG] from='client.? 192.168.123.106:0/228488653' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-08T22:58:23.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:22 vm06 bash[27746]: cluster 2026-03-08T22:58:21.454649+0000 mgr.y (mgr.14150) 68 : cluster [DBG] pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T22:58:23.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:22 vm06 bash[27746]: cluster 2026-03-08T22:58:21.454649+0000 mgr.y (mgr.14150) 68 : cluster [DBG] pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T22:58:23.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:22 vm06 bash[27746]: audit 2026-03-08T22:58:21.840846+0000 mon.c (mon.2) 2 : audit [INF] from='client.? 192.168.123.106:0/2829402309' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "f584135b-773d-4be0-b5f4-b849576faa2e"}]: dispatch
2026-03-08T22:58:23.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:22 vm06 bash[27746]: audit 2026-03-08T22:58:21.840846+0000 mon.c (mon.2) 2 : audit [INF] from='client.? 192.168.123.106:0/2829402309' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "f584135b-773d-4be0-b5f4-b849576faa2e"}]: dispatch
2026-03-08T22:58:23.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:22 vm06 bash[27746]: audit 2026-03-08T22:58:21.841307+0000 mon.a (mon.0) 273 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "f584135b-773d-4be0-b5f4-b849576faa2e"}]: dispatch
2026-03-08T22:58:23.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:22 vm06 bash[27746]: audit 2026-03-08T22:58:21.841307+0000 mon.a (mon.0) 273 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "f584135b-773d-4be0-b5f4-b849576faa2e"}]: dispatch
2026-03-08T22:58:23.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:22 vm06 bash[27746]: audit 2026-03-08T22:58:21.844093+0000 mon.a (mon.0) 274 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "f584135b-773d-4be0-b5f4-b849576faa2e"}]': finished
2026-03-08T22:58:23.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:22 vm06 bash[27746]: audit 2026-03-08T22:58:21.844093+0000 mon.a (mon.0) 274 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "f584135b-773d-4be0-b5f4-b849576faa2e"}]': finished
2026-03-08T22:58:23.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:22 vm06 bash[27746]: cluster 2026-03-08T22:58:21.846537+0000 mon.a (mon.0) 275 : cluster [DBG] osdmap e5: 1 total, 0 up, 1 in
2026-03-08T22:58:23.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:22 vm06 bash[27746]: cluster 2026-03-08T22:58:21.846537+0000 mon.a (mon.0) 275 : cluster [DBG] osdmap e5: 1 total, 0 up, 1 in
2026-03-08T22:58:23.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:22 vm06 bash[27746]: audit 2026-03-08T22:58:21.846677+0000 mon.a (mon.0) 276 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-08T22:58:23.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:22 vm06 bash[27746]: audit 2026-03-08T22:58:21.846677+0000 mon.a (mon.0) 276 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-08T22:58:23.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:22 vm06 bash[27746]: audit 2026-03-08T22:58:22.460490+0000 mon.a (mon.0) 277 : audit [DBG] from='client.? 192.168.123.106:0/228488653' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-08T22:58:23.281 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:22 vm06 bash[27746]: audit 2026-03-08T22:58:22.460490+0000 mon.a (mon.0) 277 : audit [DBG] from='client.? 192.168.123.106:0/228488653' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-08T22:58:25.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:24 vm06 bash[20625]: cluster 2026-03-08T22:58:23.454921+0000 mgr.y (mgr.14150) 69 : cluster [DBG] pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T22:58:25.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:24 vm06 bash[20625]: cluster 2026-03-08T22:58:23.454921+0000 mgr.y (mgr.14150) 69 : cluster [DBG] pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T22:58:25.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:24 vm06 bash[27746]: cluster 2026-03-08T22:58:23.454921+0000 mgr.y (mgr.14150) 69 : cluster [DBG] pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T22:58:25.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:24 vm06 bash[27746]: cluster 2026-03-08T22:58:23.454921+0000 mgr.y (mgr.14150) 69 : cluster [DBG] pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T22:58:25.306 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:24 vm11 bash[23232]: cluster 2026-03-08T22:58:23.454921+0000 mgr.y (mgr.14150) 69 : cluster [DBG] pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T22:58:25.306 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:24 vm11 bash[23232]: cluster 2026-03-08T22:58:23.454921+0000 mgr.y (mgr.14150) 69 : cluster [DBG] pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T22:58:27.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:26 vm06 bash[20625]: cluster 2026-03-08T22:58:25.455153+0000 mgr.y (mgr.14150) 70 : cluster [DBG] pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T22:58:27.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:26 vm06 bash[20625]: cluster 2026-03-08T22:58:25.455153+0000 mgr.y (mgr.14150) 70 : cluster [DBG] pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T22:58:27.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:26 vm06 bash[27746]: cluster 2026-03-08T22:58:25.455153+0000 mgr.y (mgr.14150) 70 : cluster [DBG] pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T22:58:27.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:26 vm06 bash[27746]: cluster 2026-03-08T22:58:25.455153+0000 mgr.y (mgr.14150) 70 : cluster [DBG] pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T22:58:27.306 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:26 vm11 bash[23232]: cluster 2026-03-08T22:58:25.455153+0000 mgr.y (mgr.14150) 70 : cluster [DBG] pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T22:58:27.306 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:26 vm11 bash[23232]: cluster 2026-03-08T22:58:25.455153+0000 mgr.y (mgr.14150) 70 : cluster [DBG] pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T22:58:29.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:28 vm06 bash[20625]: cluster 2026-03-08T22:58:27.455341+0000 mgr.y (mgr.14150) 71 : cluster [DBG] pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T22:58:29.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:28 vm06 bash[20625]: cluster 2026-03-08T22:58:27.455341+0000 mgr.y (mgr.14150) 71 : cluster [DBG] pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T22:58:29.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:28 vm06 bash[27746]: cluster 2026-03-08T22:58:27.455341+0000 mgr.y (mgr.14150) 71 : cluster [DBG] pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T22:58:29.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:28 vm06 bash[27746]: cluster 2026-03-08T22:58:27.455341+0000 mgr.y (mgr.14150) 71 : cluster [DBG] pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T22:58:29.306 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:28 vm11 bash[23232]: cluster 2026-03-08T22:58:27.455341+0000 mgr.y (mgr.14150) 71 : cluster [DBG] pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T22:58:29.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:28 vm11 bash[23232]: cluster 2026-03-08T22:58:27.455341+0000 mgr.y (mgr.14150) 71 : cluster [DBG] pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T22:58:31.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:30 vm06 bash[20625]: cluster 2026-03-08T22:58:29.455621+0000 mgr.y (mgr.14150) 72 : cluster [DBG] pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T22:58:31.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:30 vm06 bash[20625]: cluster 2026-03-08T22:58:29.455621+0000 mgr.y (mgr.14150) 72 : cluster [DBG] pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T22:58:31.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:30 vm06 bash[27746]: cluster 2026-03-08T22:58:29.455621+0000 mgr.y (mgr.14150) 72 : cluster [DBG] pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T22:58:31.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:30 vm06 bash[27746]: cluster 2026-03-08T22:58:29.455621+0000 mgr.y (mgr.14150) 72 : cluster [DBG] pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T22:58:31.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:30 vm11 bash[23232]: cluster 2026-03-08T22:58:29.455621+0000 mgr.y (mgr.14150) 72 : cluster [DBG] pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T22:58:31.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:30 vm11 bash[23232]: cluster 2026-03-08T22:58:29.455621+0000 mgr.y (mgr.14150) 72 : cluster [DBG] pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T22:58:31.988 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:31 vm06 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T22:58:31.989 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:31 vm06 bash[20625]: audit 2026-03-08T22:58:31.208016+0000 mon.a (mon.0) 278 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
2026-03-08T22:58:31.989 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:31 vm06 bash[20625]: audit 2026-03-08T22:58:31.208016+0000 mon.a (mon.0) 278 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
2026-03-08T22:58:31.989 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:31 vm06 bash[20625]: audit 2026-03-08T22:58:31.208644+0000 mon.a (mon.0) 279 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T22:58:31.989 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:31 vm06 bash[20625]: audit 2026-03-08T22:58:31.208644+0000 mon.a (mon.0) 279 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T22:58:31.989 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:31 vm06 bash[20625]: cephadm 2026-03-08T22:58:31.209173+0000 mgr.y (mgr.14150) 73 : cephadm [INF] Deploying daemon osd.0 on vm06
2026-03-08T22:58:31.989 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:31 vm06 bash[20625]: cephadm 2026-03-08T22:58:31.209173+0000 mgr.y (mgr.14150) 73 : cephadm [INF] Deploying daemon osd.0 on vm06
2026-03-08T22:58:31.989 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:58:31 vm06 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none.
This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T22:58:31.989 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:31 vm06 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T22:58:31.989 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:31 vm06 bash[27746]: audit 2026-03-08T22:58:31.208016+0000 mon.a (mon.0) 278 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-08T22:58:31.989 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:31 vm06 bash[27746]: audit 2026-03-08T22:58:31.208016+0000 mon.a (mon.0) 278 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-08T22:58:31.989 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:31 vm06 bash[27746]: audit 2026-03-08T22:58:31.208644+0000 mon.a (mon.0) 279 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T22:58:31.989 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:31 vm06 bash[27746]: audit 2026-03-08T22:58:31.208644+0000 mon.a (mon.0) 279 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T22:58:31.989 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:31 vm06 bash[27746]: cephadm 2026-03-08T22:58:31.209173+0000 mgr.y 
(mgr.14150) 73 : cephadm [INF] Deploying daemon osd.0 on vm06 2026-03-08T22:58:31.989 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:31 vm06 bash[27746]: cephadm 2026-03-08T22:58:31.209173+0000 mgr.y (mgr.14150) 73 : cephadm [INF] Deploying daemon osd.0 on vm06 2026-03-08T22:58:32.247 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:32 vm06 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T22:58:32.247 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:58:32 vm06 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T22:58:32.247 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:32 vm06 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-08T22:58:32.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:31 vm11 bash[23232]: audit 2026-03-08T22:58:31.208016+0000 mon.a (mon.0) 278 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-08T22:58:32.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:31 vm11 bash[23232]: audit 2026-03-08T22:58:31.208016+0000 mon.a (mon.0) 278 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-08T22:58:32.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:31 vm11 bash[23232]: audit 2026-03-08T22:58:31.208644+0000 mon.a (mon.0) 279 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T22:58:32.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:31 vm11 bash[23232]: audit 2026-03-08T22:58:31.208644+0000 mon.a (mon.0) 279 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T22:58:32.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:31 vm11 bash[23232]: cephadm 2026-03-08T22:58:31.209173+0000 mgr.y (mgr.14150) 73 : cephadm [INF] Deploying daemon osd.0 on vm06 2026-03-08T22:58:32.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:31 vm11 bash[23232]: cephadm 2026-03-08T22:58:31.209173+0000 mgr.y (mgr.14150) 73 : cephadm [INF] Deploying daemon osd.0 on vm06 2026-03-08T22:58:33.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:32 vm06 bash[20625]: cluster 2026-03-08T22:58:31.455873+0000 mgr.y (mgr.14150) 74 : cluster [DBG] pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T22:58:33.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:32 vm06 bash[20625]: cluster 2026-03-08T22:58:31.455873+0000 mgr.y (mgr.14150) 74 : cluster [DBG] pgmap v35: 0 pgs: ; 0 B data, 0 B used, 
0 B / 0 B avail 2026-03-08T22:58:33.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:32 vm06 bash[20625]: audit 2026-03-08T22:58:32.203738+0000 mon.a (mon.0) 280 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T22:58:33.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:32 vm06 bash[20625]: audit 2026-03-08T22:58:32.203738+0000 mon.a (mon.0) 280 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T22:58:33.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:32 vm06 bash[20625]: audit 2026-03-08T22:58:32.209277+0000 mon.a (mon.0) 281 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:58:33.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:32 vm06 bash[20625]: audit 2026-03-08T22:58:32.209277+0000 mon.a (mon.0) 281 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:58:33.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:32 vm06 bash[20625]: audit 2026-03-08T22:58:32.217572+0000 mon.a (mon.0) 282 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:58:33.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:32 vm06 bash[20625]: audit 2026-03-08T22:58:32.217572+0000 mon.a (mon.0) 282 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:58:33.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:32 vm06 bash[27746]: cluster 2026-03-08T22:58:31.455873+0000 mgr.y (mgr.14150) 74 : cluster [DBG] pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T22:58:33.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:32 vm06 bash[27746]: cluster 2026-03-08T22:58:31.455873+0000 mgr.y (mgr.14150) 74 : cluster [DBG] pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T22:58:33.280 
INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:32 vm06 bash[27746]: audit 2026-03-08T22:58:32.203738+0000 mon.a (mon.0) 280 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T22:58:33.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:32 vm06 bash[27746]: audit 2026-03-08T22:58:32.203738+0000 mon.a (mon.0) 280 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T22:58:33.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:32 vm06 bash[27746]: audit 2026-03-08T22:58:32.209277+0000 mon.a (mon.0) 281 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:58:33.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:32 vm06 bash[27746]: audit 2026-03-08T22:58:32.209277+0000 mon.a (mon.0) 281 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:58:33.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:32 vm06 bash[27746]: audit 2026-03-08T22:58:32.217572+0000 mon.a (mon.0) 282 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:58:33.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:32 vm06 bash[27746]: audit 2026-03-08T22:58:32.217572+0000 mon.a (mon.0) 282 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:58:33.306 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:32 vm11 bash[23232]: cluster 2026-03-08T22:58:31.455873+0000 mgr.y (mgr.14150) 74 : cluster [DBG] pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T22:58:33.306 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:32 vm11 bash[23232]: cluster 2026-03-08T22:58:31.455873+0000 mgr.y (mgr.14150) 74 : cluster [DBG] pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T22:58:33.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 
22:58:32 vm11 bash[23232]: audit 2026-03-08T22:58:32.203738+0000 mon.a (mon.0) 280 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T22:58:33.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:32 vm11 bash[23232]: audit 2026-03-08T22:58:32.203738+0000 mon.a (mon.0) 280 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T22:58:33.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:32 vm11 bash[23232]: audit 2026-03-08T22:58:32.209277+0000 mon.a (mon.0) 281 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:58:33.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:32 vm11 bash[23232]: audit 2026-03-08T22:58:32.209277+0000 mon.a (mon.0) 281 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:58:33.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:32 vm11 bash[23232]: audit 2026-03-08T22:58:32.217572+0000 mon.a (mon.0) 282 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:58:33.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:32 vm11 bash[23232]: audit 2026-03-08T22:58:32.217572+0000 mon.a (mon.0) 282 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:58:35.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:34 vm06 bash[20625]: cluster 2026-03-08T22:58:33.456090+0000 mgr.y (mgr.14150) 75 : cluster [DBG] pgmap v36: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T22:58:35.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:34 vm06 bash[20625]: cluster 2026-03-08T22:58:33.456090+0000 mgr.y (mgr.14150) 75 : cluster [DBG] pgmap v36: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T22:58:35.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:34 vm06 bash[27746]: cluster 
2026-03-08T22:58:33.456090+0000 mgr.y (mgr.14150) 75 : cluster [DBG] pgmap v36: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T22:58:35.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:34 vm06 bash[27746]: cluster 2026-03-08T22:58:33.456090+0000 mgr.y (mgr.14150) 75 : cluster [DBG] pgmap v36: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T22:58:35.306 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:34 vm11 bash[23232]: cluster 2026-03-08T22:58:33.456090+0000 mgr.y (mgr.14150) 75 : cluster [DBG] pgmap v36: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T22:58:35.306 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:34 vm11 bash[23232]: cluster 2026-03-08T22:58:33.456090+0000 mgr.y (mgr.14150) 75 : cluster [DBG] pgmap v36: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T22:58:36.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:35 vm06 bash[20625]: audit 2026-03-08T22:58:35.528306+0000 mon.a (mon.0) 283 : audit [INF] from='osd.0 v2:192.168.123.106:6801/1756339851' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-08T22:58:36.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:35 vm06 bash[20625]: audit 2026-03-08T22:58:35.528306+0000 mon.a (mon.0) 283 : audit [INF] from='osd.0 v2:192.168.123.106:6801/1756339851' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-08T22:58:36.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:35 vm06 bash[27746]: audit 2026-03-08T22:58:35.528306+0000 mon.a (mon.0) 283 : audit [INF] from='osd.0 v2:192.168.123.106:6801/1756339851' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-08T22:58:36.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:35 vm06 bash[27746]: audit 2026-03-08T22:58:35.528306+0000 mon.a (mon.0) 283 : audit [INF] from='osd.0 v2:192.168.123.106:6801/1756339851' entity='osd.0' 
cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-08T22:58:36.306 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:35 vm11 bash[23232]: audit 2026-03-08T22:58:35.528306+0000 mon.a (mon.0) 283 : audit [INF] from='osd.0 v2:192.168.123.106:6801/1756339851' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-08T22:58:36.306 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:35 vm11 bash[23232]: audit 2026-03-08T22:58:35.528306+0000 mon.a (mon.0) 283 : audit [INF] from='osd.0 v2:192.168.123.106:6801/1756339851' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-08T22:58:38.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:37 vm06 bash[20625]: cluster 2026-03-08T22:58:35.456336+0000 mgr.y (mgr.14150) 76 : cluster [DBG] pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T22:58:38.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:37 vm06 bash[20625]: cluster 2026-03-08T22:58:35.456336+0000 mgr.y (mgr.14150) 76 : cluster [DBG] pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T22:58:38.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:37 vm06 bash[20625]: audit 2026-03-08T22:58:35.969717+0000 mon.a (mon.0) 284 : audit [INF] from='osd.0 v2:192.168.123.106:6801/1756339851' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-08T22:58:38.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:37 vm06 bash[20625]: audit 2026-03-08T22:58:35.969717+0000 mon.a (mon.0) 284 : audit [INF] from='osd.0 v2:192.168.123.106:6801/1756339851' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-08T22:58:38.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:37 vm06 bash[20625]: cluster 2026-03-08T22:58:35.976310+0000 mon.a (mon.0) 285 : cluster [DBG] osdmap 
e6: 1 total, 0 up, 1 in 2026-03-08T22:58:38.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:37 vm06 bash[20625]: cluster 2026-03-08T22:58:35.976310+0000 mon.a (mon.0) 285 : cluster [DBG] osdmap e6: 1 total, 0 up, 1 in 2026-03-08T22:58:38.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:37 vm06 bash[20625]: audit 2026-03-08T22:58:35.976465+0000 mon.a (mon.0) 286 : audit [INF] from='osd.0 v2:192.168.123.106:6801/1756339851' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm06", "root=default"]}]: dispatch 2026-03-08T22:58:38.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:37 vm06 bash[20625]: audit 2026-03-08T22:58:35.976465+0000 mon.a (mon.0) 286 : audit [INF] from='osd.0 v2:192.168.123.106:6801/1756339851' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm06", "root=default"]}]: dispatch 2026-03-08T22:58:38.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:37 vm06 bash[20625]: audit 2026-03-08T22:58:35.976552+0000 mon.a (mon.0) 287 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-08T22:58:38.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:37 vm06 bash[20625]: audit 2026-03-08T22:58:35.976552+0000 mon.a (mon.0) 287 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-08T22:58:38.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:37 vm06 bash[27746]: cluster 2026-03-08T22:58:35.456336+0000 mgr.y (mgr.14150) 76 : cluster [DBG] pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T22:58:38.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:37 vm06 bash[27746]: cluster 2026-03-08T22:58:35.456336+0000 mgr.y (mgr.14150) 76 : cluster [DBG] pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T22:58:38.030 
INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:37 vm06 bash[27746]: audit 2026-03-08T22:58:35.969717+0000 mon.a (mon.0) 284 : audit [INF] from='osd.0 v2:192.168.123.106:6801/1756339851' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-08T22:58:38.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:37 vm06 bash[27746]: audit 2026-03-08T22:58:35.969717+0000 mon.a (mon.0) 284 : audit [INF] from='osd.0 v2:192.168.123.106:6801/1756339851' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-08T22:58:38.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:37 vm06 bash[27746]: cluster 2026-03-08T22:58:35.976310+0000 mon.a (mon.0) 285 : cluster [DBG] osdmap e6: 1 total, 0 up, 1 in 2026-03-08T22:58:38.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:37 vm06 bash[27746]: cluster 2026-03-08T22:58:35.976310+0000 mon.a (mon.0) 285 : cluster [DBG] osdmap e6: 1 total, 0 up, 1 in 2026-03-08T22:58:38.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:37 vm06 bash[27746]: audit 2026-03-08T22:58:35.976465+0000 mon.a (mon.0) 286 : audit [INF] from='osd.0 v2:192.168.123.106:6801/1756339851' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm06", "root=default"]}]: dispatch 2026-03-08T22:58:38.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:37 vm06 bash[27746]: audit 2026-03-08T22:58:35.976465+0000 mon.a (mon.0) 286 : audit [INF] from='osd.0 v2:192.168.123.106:6801/1756339851' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm06", "root=default"]}]: dispatch 2026-03-08T22:58:38.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:37 vm06 bash[27746]: audit 2026-03-08T22:58:35.976552+0000 mon.a (mon.0) 287 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 
2026-03-08T22:58:38.031 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:37 vm06 bash[27746]: audit 2026-03-08T22:58:35.976552+0000 mon.a (mon.0) 287 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-08T22:58:38.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:37 vm11 bash[23232]: cluster 2026-03-08T22:58:35.456336+0000 mgr.y (mgr.14150) 76 : cluster [DBG] pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T22:58:38.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:37 vm11 bash[23232]: cluster 2026-03-08T22:58:35.456336+0000 mgr.y (mgr.14150) 76 : cluster [DBG] pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T22:58:38.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:37 vm11 bash[23232]: audit 2026-03-08T22:58:35.969717+0000 mon.a (mon.0) 284 : audit [INF] from='osd.0 v2:192.168.123.106:6801/1756339851' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-08T22:58:38.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:37 vm11 bash[23232]: audit 2026-03-08T22:58:35.969717+0000 mon.a (mon.0) 284 : audit [INF] from='osd.0 v2:192.168.123.106:6801/1756339851' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-08T22:58:38.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:37 vm11 bash[23232]: cluster 2026-03-08T22:58:35.976310+0000 mon.a (mon.0) 285 : cluster [DBG] osdmap e6: 1 total, 0 up, 1 in 2026-03-08T22:58:38.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:37 vm11 bash[23232]: cluster 2026-03-08T22:58:35.976310+0000 mon.a (mon.0) 285 : cluster [DBG] osdmap e6: 1 total, 0 up, 1 in 2026-03-08T22:58:38.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:37 vm11 bash[23232]: audit 2026-03-08T22:58:35.976465+0000 mon.a (mon.0) 286 : audit [INF] from='osd.0 v2:192.168.123.106:6801/1756339851' 
entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm06", "root=default"]}]: dispatch 2026-03-08T22:58:38.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:37 vm11 bash[23232]: audit 2026-03-08T22:58:35.976465+0000 mon.a (mon.0) 286 : audit [INF] from='osd.0 v2:192.168.123.106:6801/1756339851' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm06", "root=default"]}]: dispatch 2026-03-08T22:58:38.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:37 vm11 bash[23232]: audit 2026-03-08T22:58:35.976552+0000 mon.a (mon.0) 287 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-08T22:58:38.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:37 vm11 bash[23232]: audit 2026-03-08T22:58:35.976552+0000 mon.a (mon.0) 287 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-08T22:58:39.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:38 vm06 bash[20625]: cluster 2026-03-08T22:58:36.480819+0000 osd.0 (osd.0) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-08T22:58:39.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:38 vm06 bash[20625]: cluster 2026-03-08T22:58:36.480819+0000 osd.0 (osd.0) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-08T22:58:39.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:38 vm06 bash[20625]: cluster 2026-03-08T22:58:36.480872+0000 osd.0 (osd.0) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-08T22:58:39.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:38 vm06 bash[20625]: cluster 2026-03-08T22:58:36.480872+0000 osd.0 (osd.0) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-08T22:58:39.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:38 vm06 bash[20625]: audit 2026-03-08T22:58:37.274082+0000 mon.a (mon.0) 288 : audit [INF] from='osd.0 
v2:192.168.123.106:6801/1756339851' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm06", "root=default"]}]': finished 2026-03-08T22:58:39.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:38 vm06 bash[20625]: audit 2026-03-08T22:58:37.274082+0000 mon.a (mon.0) 288 : audit [INF] from='osd.0 v2:192.168.123.106:6801/1756339851' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm06", "root=default"]}]': finished 2026-03-08T22:58:39.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:38 vm06 bash[20625]: cluster 2026-03-08T22:58:37.374713+0000 mon.a (mon.0) 289 : cluster [DBG] osdmap e7: 1 total, 0 up, 1 in 2026-03-08T22:58:39.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:38 vm06 bash[20625]: cluster 2026-03-08T22:58:37.374713+0000 mon.a (mon.0) 289 : cluster [DBG] osdmap e7: 1 total, 0 up, 1 in 2026-03-08T22:58:39.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:38 vm06 bash[20625]: cluster 2026-03-08T22:58:37.456547+0000 mgr.y (mgr.14150) 77 : cluster [DBG] pgmap v40: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T22:58:39.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:38 vm06 bash[20625]: cluster 2026-03-08T22:58:37.456547+0000 mgr.y (mgr.14150) 77 : cluster [DBG] pgmap v40: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-08T22:58:39.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:38 vm06 bash[20625]: audit 2026-03-08T22:58:37.493604+0000 mon.a (mon.0) 290 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-08T22:58:39.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:38 vm06 bash[20625]: audit 2026-03-08T22:58:37.493604+0000 mon.a (mon.0) 290 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-08T22:58:39.030 
INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:38 vm06 bash[20625]: audit 2026-03-08T22:58:37.496662+0000 mon.a (mon.0) 291 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-08T22:58:39.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:38 vm06 bash[20625]: audit 2026-03-08T22:58:38.382076+0000 mon.a (mon.0) 292 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:58:39.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:38 vm06 bash[20625]: audit 2026-03-08T22:58:38.413557+0000 mon.a (mon.0) 293 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:58:39.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:38 vm06 bash[20625]: audit 2026-03-08T22:58:38.496007+0000 mon.a (mon.0) 294 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-08T22:58:39.031 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:38 vm06 bash[27746]: cluster 2026-03-08T22:58:36.480819+0000 osd.0 (osd.0) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-08T22:58:39.031 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:38 vm06 bash[27746]: cluster 2026-03-08T22:58:36.480872+0000 osd.0 (osd.0) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-08T22:58:39.031 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:38 vm06 bash[27746]: audit 2026-03-08T22:58:37.274082+0000 mon.a (mon.0) 288 : audit [INF] from='osd.0 v2:192.168.123.106:6801/1756339851' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm06", "root=default"]}]': finished
2026-03-08T22:58:39.031 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:38 vm06 bash[27746]: cluster 2026-03-08T22:58:37.374713+0000 mon.a (mon.0) 289 : cluster [DBG] osdmap e7: 1 total, 0 up, 1 in
2026-03-08T22:58:39.031 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:38 vm06 bash[27746]: cluster 2026-03-08T22:58:37.456547+0000 mgr.y (mgr.14150) 77 : cluster [DBG] pgmap v40: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T22:58:39.031 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:38 vm06 bash[27746]: audit 2026-03-08T22:58:37.493604+0000 mon.a (mon.0) 290 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-08T22:58:39.031 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:38 vm06 bash[27746]: audit 2026-03-08T22:58:37.496662+0000 mon.a (mon.0) 291 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-08T22:58:39.031 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:38 vm06 bash[27746]: audit 2026-03-08T22:58:38.382076+0000 mon.a (mon.0) 292 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:58:39.031 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:38 vm06 bash[27746]: audit 2026-03-08T22:58:38.413557+0000 mon.a (mon.0) 293 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:58:39.031 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:38 vm06 bash[27746]: audit 2026-03-08T22:58:38.496007+0000 mon.a (mon.0) 294 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-08T22:58:39.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:38 vm11 bash[23232]: cluster 2026-03-08T22:58:36.480819+0000 osd.0 (osd.0) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-08T22:58:39.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:38 vm11 bash[23232]: cluster 2026-03-08T22:58:36.480872+0000 osd.0 (osd.0) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-08T22:58:39.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:38 vm11 bash[23232]: audit 2026-03-08T22:58:37.274082+0000 mon.a (mon.0) 288 : audit [INF] from='osd.0 v2:192.168.123.106:6801/1756339851' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm06", "root=default"]}]': finished
2026-03-08T22:58:39.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:38 vm11 bash[23232]: cluster 2026-03-08T22:58:37.374713+0000 mon.a (mon.0) 289 : cluster [DBG] osdmap e7: 1 total, 0 up, 1 in
2026-03-08T22:58:39.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:38 vm11 bash[23232]: cluster 2026-03-08T22:58:37.456547+0000 mgr.y (mgr.14150) 77 : cluster [DBG] pgmap v40: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T22:58:39.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:38 vm11 bash[23232]: audit 2026-03-08T22:58:37.493604+0000 mon.a (mon.0) 290 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-08T22:58:39.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:38 vm11 bash[23232]: audit 2026-03-08T22:58:37.496662+0000 mon.a (mon.0) 291 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-08T22:58:39.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:38 vm11 bash[23232]: audit 2026-03-08T22:58:38.382076+0000 mon.a (mon.0) 292 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:58:39.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:38 vm11 bash[23232]: audit 2026-03-08T22:58:38.413557+0000 mon.a (mon.0) 293 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:58:39.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:38 vm11 bash[23232]: audit 2026-03-08T22:58:38.496007+0000 mon.a (mon.0) 294 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-08T22:58:39.703 INFO:teuthology.orchestra.run.vm06.stdout:Created osd(s) 0 on host 'vm06'
2026-03-08T22:58:39.785 DEBUG:teuthology.orchestra.run.vm06:osd.0> sudo journalctl -f -n 0 -u ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@osd.0.service
2026-03-08T22:58:39.786 INFO:tasks.cephadm:Deploying osd.1 on vm06 with /dev/vdd...
2026-03-08T22:58:39.786 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e2eb96e6-1b41-11f1-83e5-75f1b5373d30 -- lvm zap /dev/vdd
2026-03-08T22:58:40.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:39 vm06 bash[20625]: audit 2026-03-08T22:58:38.793454+0000 mon.a (mon.0) 295 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T22:58:40.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:39 vm06 bash[20625]: audit 2026-03-08T22:58:38.793975+0000 mon.a (mon.0) 296 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T22:58:40.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:39 vm06 bash[20625]: audit 2026-03-08T22:58:38.813992+0000 mon.a (mon.0) 297 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:58:40.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:39 vm06 bash[20625]: audit 2026-03-08T22:58:39.302980+0000 mon.a (mon.0) 298 : audit [INF] from='osd.0 v2:192.168.123.106:6801/1756339851' entity='osd.0'
2026-03-08T22:58:40.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:39 vm06 bash[20625]: audit 2026-03-08T22:58:39.496256+0000 mon.a (mon.0) 299 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-08T22:58:40.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:39 vm06 bash[27746]: audit 2026-03-08T22:58:38.793454+0000 mon.a (mon.0) 295 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T22:58:40.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:39 vm06 bash[27746]: audit 2026-03-08T22:58:38.793975+0000 mon.a (mon.0) 296 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T22:58:40.031 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:39 vm06 bash[27746]: audit 2026-03-08T22:58:38.813992+0000 mon.a (mon.0) 297 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:58:40.031 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:39 vm06 bash[27746]: audit 2026-03-08T22:58:39.302980+0000 mon.a (mon.0) 298 : audit [INF] from='osd.0 v2:192.168.123.106:6801/1756339851' entity='osd.0'
2026-03-08T22:58:40.031 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:39 vm06 bash[27746]: audit 2026-03-08T22:58:39.496256+0000 mon.a (mon.0) 299 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-08T22:58:40.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:39 vm11 bash[23232]: audit 2026-03-08T22:58:38.793454+0000 mon.a (mon.0) 295 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T22:58:40.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:39 vm11 bash[23232]: audit 2026-03-08T22:58:38.793975+0000 mon.a (mon.0) 296 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T22:58:40.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:39 vm11 bash[23232]: audit 2026-03-08T22:58:38.813992+0000 mon.a (mon.0) 297 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:58:40.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:39 vm11 bash[23232]: audit 2026-03-08T22:58:39.302980+0000 mon.a (mon.0) 298 : audit [INF] from='osd.0 v2:192.168.123.106:6801/1756339851' entity='osd.0'
2026-03-08T22:58:40.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:39 vm11 bash[23232]: audit 2026-03-08T22:58:39.496256+0000 mon.a (mon.0) 299 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-08T22:58:41.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:40 vm06 bash[20625]: cluster 2026-03-08T22:58:39.456786+0000 mgr.y (mgr.14150) 78 : cluster [DBG] pgmap v41: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T22:58:41.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:40 vm06 bash[20625]: audit 2026-03-08T22:58:39.686373+0000 mon.a (mon.0) 300 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T22:58:41.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:40 vm06 bash[20625]: audit 2026-03-08T22:58:39.691560+0000 mon.a (mon.0) 301 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:58:41.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:40 vm06 bash[20625]: audit 2026-03-08T22:58:39.698634+0000 mon.a (mon.0) 302 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:58:41.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:40 vm06 bash[20625]: cluster 2026-03-08T22:58:40.309717+0000 mon.a (mon.0) 303 : cluster [INF] osd.0 v2:192.168.123.106:6801/1756339851 boot
2026-03-08T22:58:41.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:40 vm06 bash[20625]: cluster 2026-03-08T22:58:40.309886+0000 mon.a (mon.0) 304 : cluster [DBG] osdmap e8: 1 total, 1 up, 1 in
2026-03-08T22:58:41.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:40 vm06 bash[20625]: audit 2026-03-08T22:58:40.310586+0000 mon.a (mon.0) 305 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-08T22:58:41.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:40 vm06 bash[27746]: cluster 2026-03-08T22:58:39.456786+0000 mgr.y (mgr.14150) 78 : cluster [DBG] pgmap v41: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T22:58:41.031 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:40 vm06 bash[27746]: audit 2026-03-08T22:58:39.686373+0000 mon.a (mon.0) 300 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T22:58:41.031 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:40 vm06 bash[27746]: audit 2026-03-08T22:58:39.691560+0000 mon.a (mon.0) 301 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:58:41.031 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:40 vm06 bash[27746]: audit 2026-03-08T22:58:39.698634+0000 mon.a (mon.0) 302 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:58:41.031 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:40 vm06 bash[27746]: cluster 2026-03-08T22:58:40.309717+0000 mon.a (mon.0) 303 : cluster [INF] osd.0 v2:192.168.123.106:6801/1756339851 boot
2026-03-08T22:58:41.031 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:40 vm06 bash[27746]: cluster 2026-03-08T22:58:40.309886+0000 mon.a (mon.0) 304 : cluster [DBG] osdmap e8: 1 total, 1 up, 1 in
2026-03-08T22:58:41.031 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:40 vm06 bash[27746]: audit 2026-03-08T22:58:40.310586+0000 mon.a (mon.0) 305 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-08T22:58:41.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:40 vm11 bash[23232]: cluster 2026-03-08T22:58:39.456786+0000 mgr.y (mgr.14150) 78 : cluster [DBG] pgmap v41: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
2026-03-08T22:58:41.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:40 vm11 bash[23232]: audit 2026-03-08T22:58:39.686373+0000 mon.a (mon.0) 300 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T22:58:41.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:40 vm11 bash[23232]: audit 2026-03-08T22:58:39.691560+0000 mon.a (mon.0) 301 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:58:41.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:40 vm11 bash[23232]: audit 2026-03-08T22:58:39.698634+0000 mon.a (mon.0) 302 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:58:41.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:40 vm11 bash[23232]: cluster 2026-03-08T22:58:40.309717+0000 mon.a (mon.0) 303 : cluster [INF] osd.0 v2:192.168.123.106:6801/1756339851 boot
2026-03-08T22:58:41.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:40 vm11 bash[23232]: cluster 2026-03-08T22:58:40.309886+0000 mon.a (mon.0) 304 : cluster [DBG] osdmap e8: 1 total, 1 up, 1 in
2026-03-08T22:58:41.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:40 vm11 bash[23232]: audit 2026-03-08T22:58:40.310586+0000 mon.a (mon.0) 305 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-08T22:58:42.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:42 vm06 bash[20625]: cluster 2026-03-08T22:58:41.317701+0000 mon.a (mon.0) 306 : cluster [DBG] osdmap e9: 1 total, 1 up, 1 in
2026-03-08T22:58:42.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:42 vm06 bash[20625]: cluster 2026-03-08T22:58:41.457007+0000 mgr.y (mgr.14150) 79 : cluster [DBG] pgmap v44: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
2026-03-08T22:58:42.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:42 vm06 bash[27746]: cluster 2026-03-08T22:58:41.317701+0000 mon.a (mon.0) 306 : cluster [DBG] osdmap e9: 1 total, 1 up, 1 in
2026-03-08T22:58:42.781 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:42 vm06 bash[27746]: cluster 2026-03-08T22:58:41.457007+0000 mgr.y (mgr.14150) 79 : cluster [DBG] pgmap v44: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
2026-03-08T22:58:42.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:42 vm11 bash[23232]: cluster 2026-03-08T22:58:41.317701+0000 mon.a (mon.0) 306 : cluster [DBG] osdmap e9: 1 total, 1 up, 1 in
2026-03-08T22:58:42.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:42 vm11 bash[23232]: cluster 2026-03-08T22:58:41.457007+0000 mgr.y (mgr.14150) 79 : cluster [DBG] pgmap v44: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
2026-03-08T22:58:44.445 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/mon.c/config
2026-03-08T22:58:44.729 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:44 vm06 bash[20625]: cluster 2026-03-08T22:58:43.457319+0000 mgr.y (mgr.14150) 80 : cluster [DBG] pgmap v45: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
2026-03-08T22:58:44.729 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:44 vm06 bash[27746]: cluster 2026-03-08T22:58:43.457319+0000 mgr.y (mgr.14150) 80 : cluster [DBG] pgmap v45: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
2026-03-08T22:58:45.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:44 vm11 bash[23232]: cluster 2026-03-08T22:58:43.457319+0000 mgr.y (mgr.14150) 80 : cluster [DBG] pgmap v45: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
2026-03-08T22:58:46.085 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-08T22:58:46.097 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e2eb96e6-1b41-11f1-83e5-75f1b5373d30 -- ceph orch daemon add osd vm06:/dev/vdd
2026-03-08T22:58:46.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:46 vm06 bash[20625]: cephadm 2026-03-08T22:58:45.375495+0000 mgr.y (mgr.14150) 81 : cephadm [INF] Detected new or changed devices on vm06
2026-03-08T22:58:46.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:46 vm06 bash[20625]: audit 2026-03-08T22:58:45.381404+0000 mon.a (mon.0) 307 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:58:46.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:46 vm06 bash[20625]: audit 2026-03-08T22:58:45.388582+0000 mon.a (mon.0) 308 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:58:46.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:46 vm06 bash[20625]: audit 2026-03-08T22:58:45.392361+0000 mon.a (mon.0) 309 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm06", "name": "osd_memory_target"}]: dispatch
2026-03-08T22:58:46.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:46 vm06 bash[20625]: audit 2026-03-08T22:58:45.393111+0000 mon.a (mon.0) 310 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T22:58:46.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:46 vm06 bash[20625]: audit 2026-03-08T22:58:45.393533+0000 mon.a (mon.0) 311 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T22:58:46.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:46 vm06 bash[20625]: audit 2026-03-08T22:58:45.401454+0000 mon.a (mon.0) 312 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:58:46.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:46 vm06 bash[20625]: cluster 2026-03-08T22:58:45.457569+0000 mgr.y (mgr.14150) 82 : cluster [DBG] pgmap v46: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
2026-03-08T22:58:46.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:46 vm06 bash[27746]: cephadm 2026-03-08T22:58:45.375495+0000 mgr.y (mgr.14150) 81 : cephadm [INF] Detected new or changed devices on vm06
2026-03-08T22:58:46.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:46 vm06 bash[27746]: audit 2026-03-08T22:58:45.381404+0000 mon.a (mon.0) 307 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:58:46.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:46 vm06 bash[27746]: audit 2026-03-08T22:58:45.388582+0000 mon.a (mon.0) 308 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:58:46.781 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:46 vm06 bash[27746]: audit 2026-03-08T22:58:45.392361+0000 mon.a (mon.0) 309 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm06", "name": "osd_memory_target"}]: dispatch
2026-03-08T22:58:46.781 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:46 vm06 bash[27746]: audit 2026-03-08T22:58:45.393111+0000 mon.a (mon.0) 310 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T22:58:46.781 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:46 vm06 bash[27746]: audit 2026-03-08T22:58:45.393533+0000 mon.a (mon.0) 311 : audit [INF] from='mgr.14150
192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T22:58:46.781 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:46 vm06 bash[27746]: audit 2026-03-08T22:58:45.393533+0000 mon.a (mon.0) 311 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T22:58:46.781 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:46 vm06 bash[27746]: audit 2026-03-08T22:58:45.401454+0000 mon.a (mon.0) 312 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:58:46.781 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:46 vm06 bash[27746]: audit 2026-03-08T22:58:45.401454+0000 mon.a (mon.0) 312 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:58:46.781 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:46 vm06 bash[27746]: cluster 2026-03-08T22:58:45.457569+0000 mgr.y (mgr.14150) 82 : cluster [DBG] pgmap v46: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-08T22:58:46.781 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:46 vm06 bash[27746]: cluster 2026-03-08T22:58:45.457569+0000 mgr.y (mgr.14150) 82 : cluster [DBG] pgmap v46: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-08T22:58:46.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:46 vm11 bash[23232]: cephadm 2026-03-08T22:58:45.375495+0000 mgr.y (mgr.14150) 81 : cephadm [INF] Detected new or changed devices on vm06 2026-03-08T22:58:46.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:46 vm11 bash[23232]: cephadm 2026-03-08T22:58:45.375495+0000 mgr.y (mgr.14150) 81 : cephadm [INF] Detected new or changed devices on vm06 2026-03-08T22:58:46.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:46 vm11 bash[23232]: audit 2026-03-08T22:58:45.381404+0000 mon.a (mon.0) 307 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 
2026-03-08T22:58:46.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:46 vm11 bash[23232]: audit 2026-03-08T22:58:45.381404+0000 mon.a (mon.0) 307 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:58:46.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:46 vm11 bash[23232]: audit 2026-03-08T22:58:45.388582+0000 mon.a (mon.0) 308 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:58:46.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:46 vm11 bash[23232]: audit 2026-03-08T22:58:45.388582+0000 mon.a (mon.0) 308 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:58:46.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:46 vm11 bash[23232]: audit 2026-03-08T22:58:45.392361+0000 mon.a (mon.0) 309 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm06", "name": "osd_memory_target"}]: dispatch 2026-03-08T22:58:46.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:46 vm11 bash[23232]: audit 2026-03-08T22:58:45.392361+0000 mon.a (mon.0) 309 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm06", "name": "osd_memory_target"}]: dispatch 2026-03-08T22:58:46.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:46 vm11 bash[23232]: audit 2026-03-08T22:58:45.393111+0000 mon.a (mon.0) 310 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T22:58:46.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:46 vm11 bash[23232]: audit 2026-03-08T22:58:45.393111+0000 mon.a (mon.0) 310 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T22:58:46.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:46 vm11 bash[23232]: 
audit 2026-03-08T22:58:45.393533+0000 mon.a (mon.0) 311 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T22:58:46.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:46 vm11 bash[23232]: audit 2026-03-08T22:58:45.393533+0000 mon.a (mon.0) 311 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T22:58:46.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:46 vm11 bash[23232]: audit 2026-03-08T22:58:45.401454+0000 mon.a (mon.0) 312 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:58:46.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:46 vm11 bash[23232]: audit 2026-03-08T22:58:45.401454+0000 mon.a (mon.0) 312 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:58:46.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:46 vm11 bash[23232]: cluster 2026-03-08T22:58:45.457569+0000 mgr.y (mgr.14150) 82 : cluster [DBG] pgmap v46: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-08T22:58:46.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:46 vm11 bash[23232]: cluster 2026-03-08T22:58:45.457569+0000 mgr.y (mgr.14150) 82 : cluster [DBG] pgmap v46: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-08T22:58:48.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:48 vm06 bash[20625]: cluster 2026-03-08T22:58:47.457804+0000 mgr.y (mgr.14150) 83 : cluster [DBG] pgmap v47: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-08T22:58:48.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:48 vm06 bash[20625]: cluster 2026-03-08T22:58:47.457804+0000 mgr.y (mgr.14150) 83 : cluster [DBG] pgmap v47: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-08T22:58:48.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:48 vm06 
bash[27746]: cluster 2026-03-08T22:58:47.457804+0000 mgr.y (mgr.14150) 83 : cluster [DBG] pgmap v47: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-08T22:58:48.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:48 vm06 bash[27746]: cluster 2026-03-08T22:58:47.457804+0000 mgr.y (mgr.14150) 83 : cluster [DBG] pgmap v47: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-08T22:58:48.806 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:48 vm11 bash[23232]: cluster 2026-03-08T22:58:47.457804+0000 mgr.y (mgr.14150) 83 : cluster [DBG] pgmap v47: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-08T22:58:48.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:48 vm11 bash[23232]: cluster 2026-03-08T22:58:47.457804+0000 mgr.y (mgr.14150) 83 : cluster [DBG] pgmap v47: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-08T22:58:50.712 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/mon.c/config 2026-03-08T22:58:50.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:50 vm06 bash[20625]: cluster 2026-03-08T22:58:49.458109+0000 mgr.y (mgr.14150) 84 : cluster [DBG] pgmap v48: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-08T22:58:50.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:50 vm06 bash[20625]: cluster 2026-03-08T22:58:49.458109+0000 mgr.y (mgr.14150) 84 : cluster [DBG] pgmap v48: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-08T22:58:50.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:50 vm06 bash[27746]: cluster 2026-03-08T22:58:49.458109+0000 mgr.y (mgr.14150) 84 : cluster [DBG] pgmap v48: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-08T22:58:50.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:50 vm06 bash[27746]: cluster 2026-03-08T22:58:49.458109+0000 mgr.y (mgr.14150) 84 : cluster [DBG] pgmap v48: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 
2026-03-08T22:58:50.806 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:50 vm11 bash[23232]: cluster 2026-03-08T22:58:49.458109+0000 mgr.y (mgr.14150) 84 : cluster [DBG] pgmap v48: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-08T22:58:50.806 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:50 vm11 bash[23232]: cluster 2026-03-08T22:58:49.458109+0000 mgr.y (mgr.14150) 84 : cluster [DBG] pgmap v48: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-08T22:58:51.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:51 vm11 bash[23232]: audit 2026-03-08T22:58:50.967708+0000 mgr.y (mgr.14150) 85 : audit [DBG] from='client.14238 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm06:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T22:58:51.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:51 vm11 bash[23232]: audit 2026-03-08T22:58:50.967708+0000 mgr.y (mgr.14150) 85 : audit [DBG] from='client.14238 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm06:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T22:58:51.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:51 vm11 bash[23232]: audit 2026-03-08T22:58:50.968950+0000 mon.a (mon.0) 313 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-08T22:58:51.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:51 vm11 bash[23232]: audit 2026-03-08T22:58:50.968950+0000 mon.a (mon.0) 313 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-08T22:58:51.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:51 vm11 bash[23232]: audit 2026-03-08T22:58:50.970307+0000 mon.a (mon.0) 314 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", 
"entity": "client.bootstrap-osd"}]: dispatch 2026-03-08T22:58:51.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:51 vm11 bash[23232]: audit 2026-03-08T22:58:50.970307+0000 mon.a (mon.0) 314 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-08T22:58:51.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:51 vm11 bash[23232]: audit 2026-03-08T22:58:50.970736+0000 mon.a (mon.0) 315 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T22:58:51.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:51 vm11 bash[23232]: audit 2026-03-08T22:58:50.970736+0000 mon.a (mon.0) 315 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T22:58:52.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:51 vm06 bash[20625]: audit 2026-03-08T22:58:50.967708+0000 mgr.y (mgr.14150) 85 : audit [DBG] from='client.14238 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm06:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T22:58:52.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:51 vm06 bash[20625]: audit 2026-03-08T22:58:50.967708+0000 mgr.y (mgr.14150) 85 : audit [DBG] from='client.14238 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm06:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T22:58:52.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:51 vm06 bash[20625]: audit 2026-03-08T22:58:50.968950+0000 mon.a (mon.0) 313 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-08T22:58:52.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:51 vm06 bash[20625]: audit 
2026-03-08T22:58:50.968950+0000 mon.a (mon.0) 313 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-08T22:58:52.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:51 vm06 bash[20625]: audit 2026-03-08T22:58:50.970307+0000 mon.a (mon.0) 314 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-08T22:58:52.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:51 vm06 bash[20625]: audit 2026-03-08T22:58:50.970307+0000 mon.a (mon.0) 314 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-08T22:58:52.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:51 vm06 bash[20625]: audit 2026-03-08T22:58:50.970736+0000 mon.a (mon.0) 315 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T22:58:52.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:51 vm06 bash[20625]: audit 2026-03-08T22:58:50.970736+0000 mon.a (mon.0) 315 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T22:58:52.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:51 vm06 bash[27746]: audit 2026-03-08T22:58:50.967708+0000 mgr.y (mgr.14150) 85 : audit [DBG] from='client.14238 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm06:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T22:58:52.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:51 vm06 bash[27746]: audit 2026-03-08T22:58:50.967708+0000 mgr.y (mgr.14150) 85 : audit [DBG] from='client.14238 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm06:/dev/vdd", "target": 
["mon-mgr", ""]}]: dispatch 2026-03-08T22:58:52.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:51 vm06 bash[27746]: audit 2026-03-08T22:58:50.968950+0000 mon.a (mon.0) 313 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-08T22:58:52.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:51 vm06 bash[27746]: audit 2026-03-08T22:58:50.968950+0000 mon.a (mon.0) 313 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-08T22:58:52.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:51 vm06 bash[27746]: audit 2026-03-08T22:58:50.970307+0000 mon.a (mon.0) 314 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-08T22:58:52.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:51 vm06 bash[27746]: audit 2026-03-08T22:58:50.970307+0000 mon.a (mon.0) 314 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-08T22:58:52.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:51 vm06 bash[27746]: audit 2026-03-08T22:58:50.970736+0000 mon.a (mon.0) 315 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T22:58:52.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:51 vm06 bash[27746]: audit 2026-03-08T22:58:50.970736+0000 mon.a (mon.0) 315 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T22:58:52.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:52 vm11 bash[23232]: cluster 2026-03-08T22:58:51.458378+0000 mgr.y (mgr.14150) 86 : cluster [DBG] 
pgmap v49: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-08T22:58:52.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:52 vm11 bash[23232]: cluster 2026-03-08T22:58:51.458378+0000 mgr.y (mgr.14150) 86 : cluster [DBG] pgmap v49: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-08T22:58:53.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:52 vm06 bash[20625]: cluster 2026-03-08T22:58:51.458378+0000 mgr.y (mgr.14150) 86 : cluster [DBG] pgmap v49: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-08T22:58:53.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:52 vm06 bash[20625]: cluster 2026-03-08T22:58:51.458378+0000 mgr.y (mgr.14150) 86 : cluster [DBG] pgmap v49: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-08T22:58:53.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:52 vm06 bash[27746]: cluster 2026-03-08T22:58:51.458378+0000 mgr.y (mgr.14150) 86 : cluster [DBG] pgmap v49: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-08T22:58:53.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:52 vm06 bash[27746]: cluster 2026-03-08T22:58:51.458378+0000 mgr.y (mgr.14150) 86 : cluster [DBG] pgmap v49: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-08T22:58:54.799 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:54 vm06 bash[20625]: cluster 2026-03-08T22:58:53.458689+0000 mgr.y (mgr.14150) 87 : cluster [DBG] pgmap v50: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-08T22:58:54.799 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:54 vm06 bash[20625]: cluster 2026-03-08T22:58:53.458689+0000 mgr.y (mgr.14150) 87 : cluster [DBG] pgmap v50: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-08T22:58:54.799 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:54 vm06 bash[27746]: cluster 2026-03-08T22:58:53.458689+0000 mgr.y (mgr.14150) 87 : cluster [DBG] pgmap v50: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 
2026-03-08T22:58:54.799 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:54 vm06 bash[27746]: cluster 2026-03-08T22:58:53.458689+0000 mgr.y (mgr.14150) 87 : cluster [DBG] pgmap v50: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-08T22:58:54.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:54 vm11 bash[23232]: cluster 2026-03-08T22:58:53.458689+0000 mgr.y (mgr.14150) 87 : cluster [DBG] pgmap v50: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-08T22:58:54.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:54 vm11 bash[23232]: cluster 2026-03-08T22:58:53.458689+0000 mgr.y (mgr.14150) 87 : cluster [DBG] pgmap v50: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-08T22:58:55.740 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:55 vm06 bash[27746]: audit 2026-03-08T22:58:55.272775+0000 mon.c (mon.2) 3 : audit [INF] from='client.? 192.168.123.106:0/2972631514' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "2022422b-3e71-4162-b64b-3d25e2ad079e"}]: dispatch 2026-03-08T22:58:55.740 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:55 vm06 bash[27746]: audit 2026-03-08T22:58:55.272775+0000 mon.c (mon.2) 3 : audit [INF] from='client.? 192.168.123.106:0/2972631514' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "2022422b-3e71-4162-b64b-3d25e2ad079e"}]: dispatch 2026-03-08T22:58:55.740 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:55 vm06 bash[27746]: audit 2026-03-08T22:58:55.273505+0000 mon.a (mon.0) 316 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "2022422b-3e71-4162-b64b-3d25e2ad079e"}]: dispatch 2026-03-08T22:58:55.740 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:55 vm06 bash[27746]: audit 2026-03-08T22:58:55.273505+0000 mon.a (mon.0) 316 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "2022422b-3e71-4162-b64b-3d25e2ad079e"}]: dispatch 2026-03-08T22:58:55.740 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:55 vm06 bash[27746]: audit 2026-03-08T22:58:55.277738+0000 mon.a (mon.0) 317 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "2022422b-3e71-4162-b64b-3d25e2ad079e"}]': finished 2026-03-08T22:58:55.740 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:55 vm06 bash[27746]: audit 2026-03-08T22:58:55.277738+0000 mon.a (mon.0) 317 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "2022422b-3e71-4162-b64b-3d25e2ad079e"}]': finished 2026-03-08T22:58:55.740 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:55 vm06 bash[27746]: cluster 2026-03-08T22:58:55.282490+0000 mon.a (mon.0) 318 : cluster [DBG] osdmap e10: 2 total, 1 up, 2 in 2026-03-08T22:58:55.740 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:55 vm06 bash[27746]: cluster 2026-03-08T22:58:55.282490+0000 mon.a (mon.0) 318 : cluster [DBG] osdmap e10: 2 total, 1 up, 2 in 2026-03-08T22:58:55.740 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:55 vm06 bash[27746]: audit 2026-03-08T22:58:55.283333+0000 mon.a (mon.0) 319 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-08T22:58:55.740 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:55 vm06 bash[27746]: audit 2026-03-08T22:58:55.283333+0000 mon.a (mon.0) 319 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-08T22:58:55.740 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:55 vm06 bash[20625]: audit 2026-03-08T22:58:55.272775+0000 mon.c (mon.2) 3 : audit [INF] from='client.? 
192.168.123.106:0/2972631514' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "2022422b-3e71-4162-b64b-3d25e2ad079e"}]: dispatch 2026-03-08T22:58:55.741 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:55 vm06 bash[20625]: audit 2026-03-08T22:58:55.272775+0000 mon.c (mon.2) 3 : audit [INF] from='client.? 192.168.123.106:0/2972631514' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "2022422b-3e71-4162-b64b-3d25e2ad079e"}]: dispatch 2026-03-08T22:58:55.741 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:55 vm06 bash[20625]: audit 2026-03-08T22:58:55.273505+0000 mon.a (mon.0) 316 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "2022422b-3e71-4162-b64b-3d25e2ad079e"}]: dispatch 2026-03-08T22:58:55.741 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:55 vm06 bash[20625]: audit 2026-03-08T22:58:55.273505+0000 mon.a (mon.0) 316 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "2022422b-3e71-4162-b64b-3d25e2ad079e"}]: dispatch 2026-03-08T22:58:55.741 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:55 vm06 bash[20625]: audit 2026-03-08T22:58:55.277738+0000 mon.a (mon.0) 317 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "2022422b-3e71-4162-b64b-3d25e2ad079e"}]': finished 2026-03-08T22:58:55.741 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:55 vm06 bash[20625]: audit 2026-03-08T22:58:55.277738+0000 mon.a (mon.0) 317 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "2022422b-3e71-4162-b64b-3d25e2ad079e"}]': finished 2026-03-08T22:58:55.741 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:55 vm06 bash[20625]: cluster 2026-03-08T22:58:55.282490+0000 mon.a (mon.0) 318 : cluster [DBG] osdmap e10: 2 total, 1 up, 2 in 2026-03-08T22:58:55.741 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:55 vm06 bash[20625]: cluster 2026-03-08T22:58:55.282490+0000 mon.a (mon.0) 318 : cluster [DBG] osdmap e10: 2 total, 1 up, 2 in 2026-03-08T22:58:55.741 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:55 vm06 bash[20625]: audit 2026-03-08T22:58:55.283333+0000 mon.a (mon.0) 319 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-08T22:58:55.741 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:55 vm06 bash[20625]: audit 2026-03-08T22:58:55.283333+0000 mon.a (mon.0) 319 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-08T22:58:55.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:55 vm11 bash[23232]: audit 2026-03-08T22:58:55.272775+0000 mon.c (mon.2) 3 : audit [INF] from='client.? 192.168.123.106:0/2972631514' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "2022422b-3e71-4162-b64b-3d25e2ad079e"}]: dispatch 2026-03-08T22:58:55.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:55 vm11 bash[23232]: audit 2026-03-08T22:58:55.272775+0000 mon.c (mon.2) 3 : audit [INF] from='client.? 192.168.123.106:0/2972631514' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "2022422b-3e71-4162-b64b-3d25e2ad079e"}]: dispatch 2026-03-08T22:58:55.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:55 vm11 bash[23232]: audit 2026-03-08T22:58:55.273505+0000 mon.a (mon.0) 316 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "2022422b-3e71-4162-b64b-3d25e2ad079e"}]: dispatch 2026-03-08T22:58:55.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:55 vm11 bash[23232]: audit 2026-03-08T22:58:55.273505+0000 mon.a (mon.0) 316 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "2022422b-3e71-4162-b64b-3d25e2ad079e"}]: dispatch 2026-03-08T22:58:55.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:55 vm11 bash[23232]: audit 2026-03-08T22:58:55.277738+0000 mon.a (mon.0) 317 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "2022422b-3e71-4162-b64b-3d25e2ad079e"}]': finished 2026-03-08T22:58:55.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:55 vm11 bash[23232]: audit 2026-03-08T22:58:55.277738+0000 mon.a (mon.0) 317 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "2022422b-3e71-4162-b64b-3d25e2ad079e"}]': finished 2026-03-08T22:58:55.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:55 vm11 bash[23232]: cluster 2026-03-08T22:58:55.282490+0000 mon.a (mon.0) 318 : cluster [DBG] osdmap e10: 2 total, 1 up, 2 in 2026-03-08T22:58:55.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:55 vm11 bash[23232]: cluster 2026-03-08T22:58:55.282490+0000 mon.a (mon.0) 318 : cluster [DBG] osdmap e10: 2 total, 1 up, 2 in 2026-03-08T22:58:55.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:55 vm11 bash[23232]: audit 2026-03-08T22:58:55.283333+0000 mon.a (mon.0) 319 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-08T22:58:55.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:55 vm11 bash[23232]: audit 2026-03-08T22:58:55.283333+0000 mon.a (mon.0) 319 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 
2026-03-08T22:58:56.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:56 vm11 bash[23232]: cluster 2026-03-08T22:58:55.458945+0000 mgr.y (mgr.14150) 88 : cluster [DBG] pgmap v52: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-08T22:58:56.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:56 vm11 bash[23232]: cluster 2026-03-08T22:58:55.458945+0000 mgr.y (mgr.14150) 88 : cluster [DBG] pgmap v52: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-08T22:58:56.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:56 vm11 bash[23232]: audit 2026-03-08T22:58:55.884426+0000 mon.b (mon.1) 7 : audit [DBG] from='client.? 192.168.123.106:0/1604550862' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-08T22:58:56.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:56 vm11 bash[23232]: audit 2026-03-08T22:58:55.884426+0000 mon.b (mon.1) 7 : audit [DBG] from='client.? 192.168.123.106:0/1604550862' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-08T22:58:57.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:56 vm06 bash[20625]: cluster 2026-03-08T22:58:55.458945+0000 mgr.y (mgr.14150) 88 : cluster [DBG] pgmap v52: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-08T22:58:57.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:56 vm06 bash[20625]: cluster 2026-03-08T22:58:55.458945+0000 mgr.y (mgr.14150) 88 : cluster [DBG] pgmap v52: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-08T22:58:57.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:56 vm06 bash[20625]: audit 2026-03-08T22:58:55.884426+0000 mon.b (mon.1) 7 : audit [DBG] from='client.? 192.168.123.106:0/1604550862' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-08T22:58:57.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:56 vm06 bash[20625]: audit 2026-03-08T22:58:55.884426+0000 mon.b (mon.1) 7 : audit [DBG] from='client.? 
192.168.123.106:0/1604550862' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-08T22:58:57.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:56 vm06 bash[27746]: cluster 2026-03-08T22:58:55.458945+0000 mgr.y (mgr.14150) 88 : cluster [DBG] pgmap v52: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-08T22:58:57.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:56 vm06 bash[27746]: cluster 2026-03-08T22:58:55.458945+0000 mgr.y (mgr.14150) 88 : cluster [DBG] pgmap v52: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-08T22:58:57.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:56 vm06 bash[27746]: audit 2026-03-08T22:58:55.884426+0000 mon.b (mon.1) 7 : audit [DBG] from='client.? 192.168.123.106:0/1604550862' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-08T22:58:57.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:56 vm06 bash[27746]: audit 2026-03-08T22:58:55.884426+0000 mon.b (mon.1) 7 : audit [DBG] from='client.? 
192.168.123.106:0/1604550862' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-08T22:58:59.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:58 vm06 bash[20625]: cluster 2026-03-08T22:58:57.459197+0000 mgr.y (mgr.14150) 89 : cluster [DBG] pgmap v53: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-08T22:58:59.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:58:58 vm06 bash[20625]: cluster 2026-03-08T22:58:57.459197+0000 mgr.y (mgr.14150) 89 : cluster [DBG] pgmap v53: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-08T22:58:59.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:58 vm06 bash[27746]: cluster 2026-03-08T22:58:57.459197+0000 mgr.y (mgr.14150) 89 : cluster [DBG] pgmap v53: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-08T22:58:59.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:58:58 vm06 bash[27746]: cluster 2026-03-08T22:58:57.459197+0000 mgr.y (mgr.14150) 89 : cluster [DBG] pgmap v53: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-08T22:58:59.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:58 vm11 bash[23232]: cluster 2026-03-08T22:58:57.459197+0000 mgr.y (mgr.14150) 89 : cluster [DBG] pgmap v53: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-08T22:58:59.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:58:58 vm11 bash[23232]: cluster 2026-03-08T22:58:57.459197+0000 mgr.y (mgr.14150) 89 : cluster [DBG] pgmap v53: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-08T22:59:01.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:00 vm06 bash[20625]: cluster 2026-03-08T22:58:59.459484+0000 mgr.y (mgr.14150) 90 : cluster [DBG] pgmap v54: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-08T22:59:01.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:00 vm06 bash[20625]: cluster 2026-03-08T22:58:59.459484+0000 mgr.y (mgr.14150) 90 : cluster [DBG] pgmap v54: 0 pgs: ; 0 B data, 426 MiB used, 
20 GiB / 20 GiB avail 2026-03-08T22:59:01.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:00 vm06 bash[27746]: cluster 2026-03-08T22:58:59.459484+0000 mgr.y (mgr.14150) 90 : cluster [DBG] pgmap v54: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-08T22:59:01.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:00 vm06 bash[27746]: cluster 2026-03-08T22:58:59.459484+0000 mgr.y (mgr.14150) 90 : cluster [DBG] pgmap v54: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-08T22:59:01.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:00 vm11 bash[23232]: cluster 2026-03-08T22:58:59.459484+0000 mgr.y (mgr.14150) 90 : cluster [DBG] pgmap v54: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-08T22:59:01.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:00 vm11 bash[23232]: cluster 2026-03-08T22:58:59.459484+0000 mgr.y (mgr.14150) 90 : cluster [DBG] pgmap v54: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-08T22:59:03.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:02 vm11 bash[23232]: cluster 2026-03-08T22:59:01.459681+0000 mgr.y (mgr.14150) 91 : cluster [DBG] pgmap v55: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-08T22:59:03.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:02 vm11 bash[23232]: cluster 2026-03-08T22:59:01.459681+0000 mgr.y (mgr.14150) 91 : cluster [DBG] pgmap v55: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-08T22:59:03.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:02 vm06 bash[20625]: cluster 2026-03-08T22:59:01.459681+0000 mgr.y (mgr.14150) 91 : cluster [DBG] pgmap v55: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-08T22:59:03.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:02 vm06 bash[20625]: cluster 2026-03-08T22:59:01.459681+0000 mgr.y (mgr.14150) 91 : cluster [DBG] pgmap v55: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-08T22:59:03.280 
INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:02 vm06 bash[27746]: cluster 2026-03-08T22:59:01.459681+0000 mgr.y (mgr.14150) 91 : cluster [DBG] pgmap v55: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-08T22:59:03.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:02 vm06 bash[27746]: cluster 2026-03-08T22:59:01.459681+0000 mgr.y (mgr.14150) 91 : cluster [DBG] pgmap v55: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-08T22:59:05.161 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:05 vm06 bash[27746]: cluster 2026-03-08T22:59:03.459964+0000 mgr.y (mgr.14150) 92 : cluster [DBG] pgmap v56: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-08T22:59:05.161 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:05 vm06 bash[27746]: cluster 2026-03-08T22:59:03.459964+0000 mgr.y (mgr.14150) 92 : cluster [DBG] pgmap v56: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-08T22:59:05.161 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:05 vm06 bash[27746]: audit 2026-03-08T22:59:04.907403+0000 mon.a (mon.0) 320 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-08T22:59:05.161 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:05 vm06 bash[27746]: audit 2026-03-08T22:59:04.907403+0000 mon.a (mon.0) 320 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-08T22:59:05.161 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:05 vm06 bash[27746]: audit 2026-03-08T22:59:04.907899+0000 mon.a (mon.0) 321 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T22:59:05.161 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:05 vm06 bash[27746]: audit 2026-03-08T22:59:04.907899+0000 mon.a (mon.0) 321 : audit [DBG] from='mgr.14150 
192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T22:59:05.161 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:05 vm06 bash[20625]: cluster 2026-03-08T22:59:03.459964+0000 mgr.y (mgr.14150) 92 : cluster [DBG] pgmap v56: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-08T22:59:05.161 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:05 vm06 bash[20625]: cluster 2026-03-08T22:59:03.459964+0000 mgr.y (mgr.14150) 92 : cluster [DBG] pgmap v56: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-08T22:59:05.161 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:05 vm06 bash[20625]: audit 2026-03-08T22:59:04.907403+0000 mon.a (mon.0) 320 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-08T22:59:05.161 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:05 vm06 bash[20625]: audit 2026-03-08T22:59:04.907403+0000 mon.a (mon.0) 320 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-08T22:59:05.161 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:05 vm06 bash[20625]: audit 2026-03-08T22:59:04.907899+0000 mon.a (mon.0) 321 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T22:59:05.161 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:05 vm06 bash[20625]: audit 2026-03-08T22:59:04.907899+0000 mon.a (mon.0) 321 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T22:59:05.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:05 vm11 bash[23232]: cluster 2026-03-08T22:59:03.459964+0000 mgr.y (mgr.14150) 92 : cluster [DBG] pgmap v56: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-08T22:59:05.307 
INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:05 vm11 bash[23232]: cluster 2026-03-08T22:59:03.459964+0000 mgr.y (mgr.14150) 92 : cluster [DBG] pgmap v56: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-08T22:59:05.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:05 vm11 bash[23232]: audit 2026-03-08T22:59:04.907403+0000 mon.a (mon.0) 320 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-08T22:59:05.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:05 vm11 bash[23232]: audit 2026-03-08T22:59:04.907403+0000 mon.a (mon.0) 320 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-08T22:59:05.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:05 vm11 bash[23232]: audit 2026-03-08T22:59:04.907899+0000 mon.a (mon.0) 321 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T22:59:05.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:05 vm11 bash[23232]: audit 2026-03-08T22:59:04.907899+0000 mon.a (mon.0) 321 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T22:59:06.087 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:05 vm06 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-08T22:59:06.087 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:06 vm06 bash[20625]: cephadm 2026-03-08T22:59:04.908293+0000 mgr.y (mgr.14150) 93 : cephadm [INF] Deploying daemon osd.1 on vm06 2026-03-08T22:59:06.087 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:06 vm06 bash[20625]: cephadm 2026-03-08T22:59:04.908293+0000 mgr.y (mgr.14150) 93 : cephadm [INF] Deploying daemon osd.1 on vm06 2026-03-08T22:59:06.087 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:06 vm06 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T22:59:06.087 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:59:05 vm06 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T22:59:06.087 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:59:06 vm06 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T22:59:06.087 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:05 vm06 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. 
This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T22:59:06.087 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:06 vm06 bash[27746]: cephadm 2026-03-08T22:59:04.908293+0000 mgr.y (mgr.14150) 93 : cephadm [INF] Deploying daemon osd.1 on vm06 2026-03-08T22:59:06.087 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:06 vm06 bash[27746]: cephadm 2026-03-08T22:59:04.908293+0000 mgr.y (mgr.14150) 93 : cephadm [INF] Deploying daemon osd.1 on vm06 2026-03-08T22:59:06.087 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:06 vm06 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T22:59:06.087 INFO:journalctl@ceph.osd.0.vm06.stdout:Mar 08 22:59:05 vm06 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T22:59:06.087 INFO:journalctl@ceph.osd.0.vm06.stdout:Mar 08 22:59:06 vm06 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. 
Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T22:59:06.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:06 vm11 bash[23232]: cephadm 2026-03-08T22:59:04.908293+0000 mgr.y (mgr.14150) 93 : cephadm [INF] Deploying daemon osd.1 on vm06 2026-03-08T22:59:06.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:06 vm11 bash[23232]: cephadm 2026-03-08T22:59:04.908293+0000 mgr.y (mgr.14150) 93 : cephadm [INF] Deploying daemon osd.1 on vm06 2026-03-08T22:59:07.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:07 vm06 bash[20625]: cluster 2026-03-08T22:59:05.460249+0000 mgr.y (mgr.14150) 94 : cluster [DBG] pgmap v57: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-08T22:59:07.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:07 vm06 bash[20625]: cluster 2026-03-08T22:59:05.460249+0000 mgr.y (mgr.14150) 94 : cluster [DBG] pgmap v57: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-08T22:59:07.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:07 vm06 bash[20625]: audit 2026-03-08T22:59:06.118252+0000 mon.a (mon.0) 322 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T22:59:07.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:07 vm06 bash[20625]: audit 2026-03-08T22:59:06.118252+0000 mon.a (mon.0) 322 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T22:59:07.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:07 vm06 bash[20625]: audit 2026-03-08T22:59:06.123270+0000 mon.a (mon.0) 323 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:59:07.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:07 vm06 bash[20625]: audit 2026-03-08T22:59:06.123270+0000 mon.a (mon.0) 323 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 
2026-03-08T22:59:07.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:07 vm06 bash[20625]: audit 2026-03-08T22:59:06.133300+0000 mon.a (mon.0) 324 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:59:07.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:07 vm06 bash[20625]: audit 2026-03-08T22:59:06.133300+0000 mon.a (mon.0) 324 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:59:07.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:07 vm06 bash[27746]: cluster 2026-03-08T22:59:05.460249+0000 mgr.y (mgr.14150) 94 : cluster [DBG] pgmap v57: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-08T22:59:07.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:07 vm06 bash[27746]: cluster 2026-03-08T22:59:05.460249+0000 mgr.y (mgr.14150) 94 : cluster [DBG] pgmap v57: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-08T22:59:07.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:07 vm06 bash[27746]: audit 2026-03-08T22:59:06.118252+0000 mon.a (mon.0) 322 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T22:59:07.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:07 vm06 bash[27746]: audit 2026-03-08T22:59:06.118252+0000 mon.a (mon.0) 322 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T22:59:07.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:07 vm06 bash[27746]: audit 2026-03-08T22:59:06.123270+0000 mon.a (mon.0) 323 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:59:07.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:07 vm06 bash[27746]: audit 2026-03-08T22:59:06.123270+0000 mon.a (mon.0) 323 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:59:07.280 
INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:07 vm06 bash[27746]: audit 2026-03-08T22:59:06.133300+0000 mon.a (mon.0) 324 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:59:07.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:07 vm06 bash[27746]: audit 2026-03-08T22:59:06.133300+0000 mon.a (mon.0) 324 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:59:07.306 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:07 vm11 bash[23232]: cluster 2026-03-08T22:59:05.460249+0000 mgr.y (mgr.14150) 94 : cluster [DBG] pgmap v57: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-08T22:59:07.306 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:07 vm11 bash[23232]: cluster 2026-03-08T22:59:05.460249+0000 mgr.y (mgr.14150) 94 : cluster [DBG] pgmap v57: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-08T22:59:07.306 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:07 vm11 bash[23232]: audit 2026-03-08T22:59:06.118252+0000 mon.a (mon.0) 322 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T22:59:07.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:07 vm11 bash[23232]: audit 2026-03-08T22:59:06.118252+0000 mon.a (mon.0) 322 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T22:59:07.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:07 vm11 bash[23232]: audit 2026-03-08T22:59:06.123270+0000 mon.a (mon.0) 323 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:59:07.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:07 vm11 bash[23232]: audit 2026-03-08T22:59:06.123270+0000 mon.a (mon.0) 323 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:59:07.307 
INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:07 vm11 bash[23232]: audit 2026-03-08T22:59:06.133300+0000 mon.a (mon.0) 324 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:59:07.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:07 vm11 bash[23232]: audit 2026-03-08T22:59:06.133300+0000 mon.a (mon.0) 324 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:59:09.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:09 vm06 bash[20625]: cluster 2026-03-08T22:59:07.460461+0000 mgr.y (mgr.14150) 95 : cluster [DBG] pgmap v58: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-08T22:59:09.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:09 vm06 bash[20625]: cluster 2026-03-08T22:59:07.460461+0000 mgr.y (mgr.14150) 95 : cluster [DBG] pgmap v58: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-08T22:59:09.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:09 vm06 bash[27746]: cluster 2026-03-08T22:59:07.460461+0000 mgr.y (mgr.14150) 95 : cluster [DBG] pgmap v58: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-08T22:59:09.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:09 vm06 bash[27746]: cluster 2026-03-08T22:59:07.460461+0000 mgr.y (mgr.14150) 95 : cluster [DBG] pgmap v58: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-08T22:59:09.306 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:09 vm11 bash[23232]: cluster 2026-03-08T22:59:07.460461+0000 mgr.y (mgr.14150) 95 : cluster [DBG] pgmap v58: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-08T22:59:09.306 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:09 vm11 bash[23232]: cluster 2026-03-08T22:59:07.460461+0000 mgr.y (mgr.14150) 95 : cluster [DBG] pgmap v58: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-08T22:59:10.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:10 vm06 bash[20625]: audit 
2026-03-08T22:59:09.475826+0000 mon.a (mon.0) 325 : audit [INF] from='osd.1 v2:192.168.123.106:6805/2598119140' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-08T22:59:10.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:10 vm06 bash[20625]: audit 2026-03-08T22:59:09.475826+0000 mon.a (mon.0) 325 : audit [INF] from='osd.1 v2:192.168.123.106:6805/2598119140' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-08T22:59:10.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:10 vm06 bash[27746]: audit 2026-03-08T22:59:09.475826+0000 mon.a (mon.0) 325 : audit [INF] from='osd.1 v2:192.168.123.106:6805/2598119140' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-08T22:59:10.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:10 vm06 bash[27746]: audit 2026-03-08T22:59:09.475826+0000 mon.a (mon.0) 325 : audit [INF] from='osd.1 v2:192.168.123.106:6805/2598119140' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-08T22:59:10.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:10 vm11 bash[23232]: audit 2026-03-08T22:59:09.475826+0000 mon.a (mon.0) 325 : audit [INF] from='osd.1 v2:192.168.123.106:6805/2598119140' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-08T22:59:10.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:10 vm11 bash[23232]: audit 2026-03-08T22:59:09.475826+0000 mon.a (mon.0) 325 : audit [INF] from='osd.1 v2:192.168.123.106:6805/2598119140' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-08T22:59:11.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:11 vm11 bash[23232]: cluster 2026-03-08T22:59:09.460689+0000 mgr.y (mgr.14150) 96 : cluster [DBG] pgmap v59: 0 pgs: ; 0 B 
data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-08T22:59:11.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:11 vm11 bash[23232]: cluster 2026-03-08T22:59:09.460689+0000 mgr.y (mgr.14150) 96 : cluster [DBG] pgmap v59: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-08T22:59:11.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:11 vm11 bash[23232]: audit 2026-03-08T22:59:10.030708+0000 mon.a (mon.0) 326 : audit [INF] from='osd.1 v2:192.168.123.106:6805/2598119140' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished 2026-03-08T22:59:11.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:11 vm11 bash[23232]: audit 2026-03-08T22:59:10.030708+0000 mon.a (mon.0) 326 : audit [INF] from='osd.1 v2:192.168.123.106:6805/2598119140' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished 2026-03-08T22:59:11.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:11 vm11 bash[23232]: cluster 2026-03-08T22:59:10.032979+0000 mon.a (mon.0) 327 : cluster [DBG] osdmap e11: 2 total, 1 up, 2 in 2026-03-08T22:59:11.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:11 vm11 bash[23232]: cluster 2026-03-08T22:59:10.032979+0000 mon.a (mon.0) 327 : cluster [DBG] osdmap e11: 2 total, 1 up, 2 in 2026-03-08T22:59:11.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:11 vm11 bash[23232]: audit 2026-03-08T22:59:10.033125+0000 mon.a (mon.0) 328 : audit [INF] from='osd.1 v2:192.168.123.106:6805/2598119140' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm06", "root=default"]}]: dispatch 2026-03-08T22:59:11.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:11 vm11 bash[23232]: audit 2026-03-08T22:59:10.033125+0000 mon.a (mon.0) 328 : audit [INF] from='osd.1 v2:192.168.123.106:6805/2598119140' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm06", 
"root=default"]}]: dispatch 2026-03-08T22:59:11.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:11 vm11 bash[23232]: audit 2026-03-08T22:59:10.034549+0000 mon.a (mon.0) 329 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-08T22:59:11.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:11 vm11 bash[23232]: audit 2026-03-08T22:59:10.034549+0000 mon.a (mon.0) 329 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-08T22:59:11.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:11 vm06 bash[20625]: cluster 2026-03-08T22:59:09.460689+0000 mgr.y (mgr.14150) 96 : cluster [DBG] pgmap v59: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-08T22:59:11.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:11 vm06 bash[20625]: cluster 2026-03-08T22:59:09.460689+0000 mgr.y (mgr.14150) 96 : cluster [DBG] pgmap v59: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-08T22:59:11.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:11 vm06 bash[20625]: audit 2026-03-08T22:59:10.030708+0000 mon.a (mon.0) 326 : audit [INF] from='osd.1 v2:192.168.123.106:6805/2598119140' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished 2026-03-08T22:59:11.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:11 vm06 bash[20625]: audit 2026-03-08T22:59:10.030708+0000 mon.a (mon.0) 326 : audit [INF] from='osd.1 v2:192.168.123.106:6805/2598119140' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished 2026-03-08T22:59:11.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:11 vm06 bash[20625]: cluster 2026-03-08T22:59:10.032979+0000 mon.a (mon.0) 327 : cluster [DBG] osdmap e11: 2 total, 1 up, 2 in 2026-03-08T22:59:11.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:11 vm06 
bash[20625]: cluster 2026-03-08T22:59:10.032979+0000 mon.a (mon.0) 327 : cluster [DBG] osdmap e11: 2 total, 1 up, 2 in
2026-03-08T22:59:11.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:11 vm06 bash[20625]: audit 2026-03-08T22:59:10.033125+0000 mon.a (mon.0) 328 : audit [INF] from='osd.1 v2:192.168.123.106:6805/2598119140' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm06", "root=default"]}]: dispatch
2026-03-08T22:59:11.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:11 vm06 bash[20625]: audit 2026-03-08T22:59:10.034549+0000 mon.a (mon.0) 329 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-08T22:59:11.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:11 vm06 bash[27746]: cluster 2026-03-08T22:59:09.460689+0000 mgr.y (mgr.14150) 96 : cluster [DBG] pgmap v59: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
2026-03-08T22:59:11.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:11 vm06 bash[27746]: audit 2026-03-08T22:59:10.030708+0000 mon.a (mon.0) 326 : audit [INF] from='osd.1 v2:192.168.123.106:6805/2598119140' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
2026-03-08T22:59:11.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:11 vm06 bash[27746]: cluster 2026-03-08T22:59:10.032979+0000 mon.a (mon.0) 327 : cluster [DBG] osdmap e11: 2 total, 1 up, 2 in
2026-03-08T22:59:11.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:11 vm06 bash[27746]: audit 2026-03-08T22:59:10.033125+0000 mon.a (mon.0) 328 : audit [INF] from='osd.1 v2:192.168.123.106:6805/2598119140' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm06", "root=default"]}]: dispatch
2026-03-08T22:59:11.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:11 vm06 bash[27746]: audit 2026-03-08T22:59:10.034549+0000 mon.a (mon.0) 329 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-08T22:59:12.360 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:12 vm06 bash[20625]: audit 2026-03-08T22:59:11.035006+0000 mon.a (mon.0) 330 : audit [INF] from='osd.1 v2:192.168.123.106:6805/2598119140' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm06", "root=default"]}]': finished
2026-03-08T22:59:12.361 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:12 vm06 bash[20625]: cluster 2026-03-08T22:59:11.040071+0000 mon.a (mon.0) 331 : cluster [DBG] osdmap e12: 2 total, 1 up, 2 in
2026-03-08T22:59:12.361 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:12 vm06 bash[20625]: audit 2026-03-08T22:59:11.041128+0000 mon.a (mon.0) 332 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-08T22:59:12.361 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:12 vm06 bash[20625]: audit 2026-03-08T22:59:11.049949+0000 mon.a (mon.0) 333 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-08T22:59:12.361 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:12 vm06 bash[20625]: audit 2026-03-08T22:59:12.043304+0000 mon.a (mon.0) 334 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-08T22:59:12.361 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:12 vm06 bash[27746]: audit 2026-03-08T22:59:11.035006+0000 mon.a (mon.0) 330 : audit [INF] from='osd.1 v2:192.168.123.106:6805/2598119140' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm06", "root=default"]}]': finished
2026-03-08T22:59:12.361 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:12 vm06 bash[27746]: cluster 2026-03-08T22:59:11.040071+0000 mon.a (mon.0) 331 : cluster [DBG] osdmap e12: 2 total, 1 up, 2 in
2026-03-08T22:59:12.361 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:12 vm06 bash[27746]: audit 2026-03-08T22:59:11.041128+0000 mon.a (mon.0) 332 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-08T22:59:12.361 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:12 vm06 bash[27746]: audit 2026-03-08T22:59:11.049949+0000 mon.a (mon.0) 333 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-08T22:59:12.361 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:12 vm06 bash[27746]: audit 2026-03-08T22:59:12.043304+0000 mon.a (mon.0) 334 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-08T22:59:12.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:12 vm11 bash[23232]: audit 2026-03-08T22:59:11.035006+0000 mon.a (mon.0) 330 : audit [INF] from='osd.1 v2:192.168.123.106:6805/2598119140' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm06", "root=default"]}]': finished
2026-03-08T22:59:12.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:12 vm11 bash[23232]: cluster 2026-03-08T22:59:11.040071+0000 mon.a (mon.0) 331 : cluster [DBG] osdmap e12: 2 total, 1 up, 2 in
2026-03-08T22:59:12.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:12 vm11 bash[23232]: audit 2026-03-08T22:59:11.041128+0000 mon.a (mon.0) 332 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-08T22:59:12.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:12 vm11 bash[23232]: audit 2026-03-08T22:59:11.049949+0000 mon.a (mon.0) 333 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-08T22:59:12.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:12 vm11 bash[23232]: audit 2026-03-08T22:59:12.043304+0000 mon.a (mon.0) 334 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-08T22:59:13.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:13 vm06 bash[20625]: cluster 2026-03-08T22:59:10.436481+0000 osd.1 (osd.1) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-08T22:59:13.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:13 vm06 bash[20625]: cluster 2026-03-08T22:59:10.436549+0000 osd.1 (osd.1) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-08T22:59:13.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:13 vm06 bash[20625]: cluster 2026-03-08T22:59:11.460980+0000 mgr.y (mgr.14150) 97 : cluster [DBG] pgmap v62: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
2026-03-08T22:59:13.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:13 vm06 bash[20625]: cluster 2026-03-08T22:59:12.074363+0000 mon.a (mon.0) 335 : cluster [INF] osd.1 v2:192.168.123.106:6805/2598119140 boot
2026-03-08T22:59:13.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:13 vm06 bash[20625]: cluster 2026-03-08T22:59:12.074499+0000 mon.a (mon.0) 336 : cluster [DBG] osdmap e13: 2 total, 2 up, 2 in
2026-03-08T22:59:13.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:13 vm06 bash[20625]: audit 2026-03-08T22:59:12.083040+0000 mon.a (mon.0) 337 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-08T22:59:13.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:13 vm06 bash[20625]: audit 2026-03-08T22:59:12.272779+0000 mon.a (mon.0) 338 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:59:13.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:13 vm06 bash[20625]: audit 2026-03-08T22:59:12.294125+0000 mon.a (mon.0) 339 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:59:13.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:13 vm06 bash[20625]: audit 2026-03-08T22:59:12.295202+0000 mon.a (mon.0) 340 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T22:59:13.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:13 vm06 bash[20625]: audit 2026-03-08T22:59:12.296592+0000 mon.a (mon.0) 341 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T22:59:13.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:13 vm06 bash[20625]: audit 2026-03-08T22:59:12.311155+0000 mon.a (mon.0) 342 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:59:13.281 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:13 vm06 bash[27746]: cluster 2026-03-08T22:59:10.436481+0000 osd.1 (osd.1) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-08T22:59:13.281 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:13 vm06 bash[27746]: cluster 2026-03-08T22:59:10.436549+0000 osd.1 (osd.1) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-08T22:59:13.281 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:13 vm06 bash[27746]: cluster 2026-03-08T22:59:11.460980+0000 mgr.y (mgr.14150) 97 : cluster [DBG] pgmap v62: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
2026-03-08T22:59:13.281 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:13 vm06 bash[27746]: cluster 2026-03-08T22:59:12.074363+0000 mon.a (mon.0) 335 : cluster [INF] osd.1 v2:192.168.123.106:6805/2598119140 boot
2026-03-08T22:59:13.281 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:13 vm06 bash[27746]: cluster 2026-03-08T22:59:12.074499+0000 mon.a (mon.0) 336 : cluster [DBG] osdmap e13: 2 total, 2 up, 2 in
2026-03-08T22:59:13.281 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:13 vm06 bash[27746]: audit 2026-03-08T22:59:12.083040+0000 mon.a (mon.0) 337 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-08T22:59:13.281 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:13 vm06 bash[27746]: audit 2026-03-08T22:59:12.272779+0000 mon.a (mon.0) 338 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:59:13.281 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:13 vm06 bash[27746]: audit 2026-03-08T22:59:12.294125+0000 mon.a (mon.0) 339 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:59:13.281 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:13 vm06 bash[27746]: audit 2026-03-08T22:59:12.295202+0000 mon.a (mon.0) 340 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T22:59:13.281 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:13 vm06 bash[27746]: audit 2026-03-08T22:59:12.296592+0000 mon.a (mon.0) 341 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T22:59:13.281 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:13 vm06 bash[27746]: audit 2026-03-08T22:59:12.311155+0000 mon.a (mon.0) 342 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:59:13.393 INFO:teuthology.orchestra.run.vm06.stdout:Created osd(s) 1 on host 'vm06'
2026-03-08T22:59:13.474 DEBUG:teuthology.orchestra.run.vm06:osd.1> sudo journalctl -f -n 0 -u ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@osd.1.service
2026-03-08T22:59:13.475 INFO:tasks.cephadm:Deploying osd.2 on vm06 with /dev/vdc...
2026-03-08T22:59:13.475 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e2eb96e6-1b41-11f1-83e5-75f1b5373d30 -- lvm zap /dev/vdc
2026-03-08T22:59:13.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:13 vm11 bash[23232]: cluster 2026-03-08T22:59:10.436481+0000 osd.1 (osd.1) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-08T22:59:13.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:13 vm11 bash[23232]: cluster 2026-03-08T22:59:10.436549+0000 osd.1 (osd.1) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-08T22:59:13.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:13 vm11 bash[23232]: cluster 2026-03-08T22:59:11.460980+0000 mgr.y (mgr.14150) 97 : cluster [DBG] pgmap v62: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
2026-03-08T22:59:13.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:13 vm11 bash[23232]: cluster 2026-03-08T22:59:12.074363+0000 mon.a (mon.0) 335 : cluster [INF] osd.1 v2:192.168.123.106:6805/2598119140 boot
2026-03-08T22:59:13.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:13 vm11 bash[23232]: cluster 2026-03-08T22:59:12.074499+0000 mon.a (mon.0) 336 : cluster [DBG] osdmap e13: 2 total, 2 up, 2 in
2026-03-08T22:59:13.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:13 vm11 bash[23232]: audit 2026-03-08T22:59:12.083040+0000 mon.a (mon.0) 337 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-08T22:59:13.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:13 vm11 bash[23232]: audit 2026-03-08T22:59:12.272779+0000 mon.a (mon.0) 338 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:59:13.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:13 vm11 bash[23232]: audit 2026-03-08T22:59:12.294125+0000 mon.a (mon.0) 339 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:59:13.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:13 vm11 bash[23232]: audit 2026-03-08T22:59:12.295202+0000 mon.a (mon.0) 340 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T22:59:13.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:13 vm11 bash[23232]: audit 2026-03-08T22:59:12.296592+0000 mon.a (mon.0) 341 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T22:59:13.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:13 vm11 bash[23232]: audit 2026-03-08T22:59:12.311155+0000 mon.a (mon.0) 342 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:59:14.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:14 vm06 bash[20625]: cluster 2026-03-08T22:59:13.314495+0000 mon.a (mon.0) 343 : cluster [DBG] osdmap e14: 2 total, 2 up, 2 in
2026-03-08T22:59:14.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:14 vm06 bash[20625]: audit 2026-03-08T22:59:13.379004+0000 mon.a (mon.0) 344 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T22:59:14.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:14 vm06 bash[20625]: audit 2026-03-08T22:59:13.384180+0000 mon.a (mon.0) 345 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:59:14.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:14 vm06 bash[20625]: audit 2026-03-08T22:59:13.391088+0000 mon.a (mon.0) 346 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:59:14.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:14 vm06 bash[20625]: cluster 2026-03-08T22:59:13.461229+0000 mgr.y (mgr.14150) 98 : cluster [DBG] pgmap v65: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
2026-03-08T22:59:14.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:14 vm06 bash[27746]: cluster 2026-03-08T22:59:13.314495+0000 mon.a (mon.0) 343 : cluster [DBG] osdmap e14: 2 total, 2 up, 2 in
2026-03-08T22:59:14.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:14 vm06 bash[27746]: audit 2026-03-08T22:59:13.379004+0000 mon.a (mon.0) 344 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T22:59:14.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:14 vm06 bash[27746]: audit 2026-03-08T22:59:13.384180+0000 mon.a (mon.0) 345 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:59:14.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:14 vm06 bash[27746]: audit 2026-03-08T22:59:13.391088+0000 mon.a (mon.0) 346 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:59:14.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:14 vm06 bash[27746]: cluster 2026-03-08T22:59:13.461229+0000 mgr.y (mgr.14150) 98 : cluster [DBG] pgmap v65: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
2026-03-08T22:59:14.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:14 vm11 bash[23232]: cluster 2026-03-08T22:59:13.314495+0000 mon.a (mon.0) 343 : cluster [DBG] osdmap e14: 2 total, 2 up, 2 in
2026-03-08T22:59:14.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:14 vm11 bash[23232]: audit 2026-03-08T22:59:13.379004+0000 mon.a (mon.0) 344 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T22:59:14.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:14 vm11 bash[23232]: audit 2026-03-08T22:59:13.384180+0000 mon.a (mon.0) 345 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:59:14.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:14 vm11 bash[23232]: audit 2026-03-08T22:59:13.391088+0000 mon.a (mon.0) 346 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:59:14.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:14 vm11 bash[23232]: cluster 2026-03-08T22:59:13.461229+0000 mgr.y (mgr.14150) 98 : cluster [DBG] pgmap v65: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
2026-03-08T22:59:16.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:16 vm06 bash[20625]: cluster 2026-03-08T22:59:15.461450+0000 mgr.y (mgr.14150) 99 : cluster [DBG] pgmap v66: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
2026-03-08T22:59:16.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:16 vm06 bash[27746]: cluster 2026-03-08T22:59:15.461450+0000 mgr.y (mgr.14150) 99 : cluster [DBG] pgmap v66: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
2026-03-08T22:59:16.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:16 vm11 bash[23232]: cluster 2026-03-08T22:59:15.461450+0000 mgr.y (mgr.14150) 99 : cluster [DBG] pgmap v66: 0 pgs: ; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
2026-03-08T22:59:18.137 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/mon.c/config
2026-03-08T22:59:18.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:18 vm11 bash[23232]: cluster 2026-03-08T22:59:17.461759+0000 mgr.y (mgr.14150) 100 : cluster [DBG] pgmap v67: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-08T22:59:18.929 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:18 vm06 bash[20625]: cluster 2026-03-08T22:59:17.461759+0000 mgr.y (mgr.14150) 100 : cluster [DBG] pgmap v67: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-08T22:59:18.929 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:18 vm06 bash[27746]: cluster 2026-03-08T22:59:17.461759+0000 mgr.y (mgr.14150) 100 : cluster [DBG] pgmap v67: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-08T22:59:19.769 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-08T22:59:19.783 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e2eb96e6-1b41-11f1-83e5-75f1b5373d30 -- ceph orch daemon add osd vm06:/dev/vdc
2026-03-08T22:59:20.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:20 vm06 bash[20625]: cephadm 2026-03-08T22:59:18.992535+0000 mgr.y (mgr.14150) 101 : cephadm [INF] Detected new or changed devices on vm06
2026-03-08T22:59:20.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:20 vm06 bash[20625]: audit 2026-03-08T22:59:19.009898+0000 mon.a (mon.0) 347 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:59:20.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:20 vm06 bash[20625]: audit 2026-03-08T22:59:19.022132+0000 mon.a (mon.0) 348 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:59:20.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:20 vm06 bash[20625]: audit 2026-03-08T22:59:19.023209+0000 mon.a (mon.0) 349 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm06", "name": "osd_memory_target"}]: dispatch
entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm06", "name": "osd_memory_target"}]: dispatch 2026-03-08T22:59:20.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:20 vm06 bash[20625]: audit 2026-03-08T22:59:19.023938+0000 mon.a (mon.0) 350 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T22:59:20.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:20 vm06 bash[20625]: audit 2026-03-08T22:59:19.023938+0000 mon.a (mon.0) 350 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T22:59:20.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:20 vm06 bash[20625]: audit 2026-03-08T22:59:19.024355+0000 mon.a (mon.0) 351 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T22:59:20.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:20 vm06 bash[20625]: audit 2026-03-08T22:59:19.024355+0000 mon.a (mon.0) 351 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T22:59:20.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:20 vm06 bash[20625]: audit 2026-03-08T22:59:19.034982+0000 mon.a (mon.0) 352 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:59:20.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:20 vm06 bash[20625]: audit 2026-03-08T22:59:19.034982+0000 mon.a (mon.0) 352 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:59:20.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:20 vm06 bash[27746]: cephadm 2026-03-08T22:59:18.992535+0000 mgr.y (mgr.14150) 101 : cephadm [INF] Detected new or changed devices on vm06 2026-03-08T22:59:20.280 
INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:20 vm06 bash[27746]: cephadm 2026-03-08T22:59:18.992535+0000 mgr.y (mgr.14150) 101 : cephadm [INF] Detected new or changed devices on vm06 2026-03-08T22:59:20.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:20 vm06 bash[27746]: audit 2026-03-08T22:59:19.009898+0000 mon.a (mon.0) 347 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:59:20.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:20 vm06 bash[27746]: audit 2026-03-08T22:59:19.009898+0000 mon.a (mon.0) 347 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:59:20.281 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:20 vm06 bash[27746]: audit 2026-03-08T22:59:19.022132+0000 mon.a (mon.0) 348 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:59:20.281 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:20 vm06 bash[27746]: audit 2026-03-08T22:59:19.022132+0000 mon.a (mon.0) 348 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:59:20.281 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:20 vm06 bash[27746]: audit 2026-03-08T22:59:19.023209+0000 mon.a (mon.0) 349 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm06", "name": "osd_memory_target"}]: dispatch 2026-03-08T22:59:20.281 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:20 vm06 bash[27746]: audit 2026-03-08T22:59:19.023209+0000 mon.a (mon.0) 349 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm06", "name": "osd_memory_target"}]: dispatch 2026-03-08T22:59:20.281 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:20 vm06 bash[27746]: audit 2026-03-08T22:59:19.023938+0000 mon.a (mon.0) 350 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config 
generate-minimal-conf"}]: dispatch 2026-03-08T22:59:20.281 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:20 vm06 bash[27746]: audit 2026-03-08T22:59:19.023938+0000 mon.a (mon.0) 350 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T22:59:20.281 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:20 vm06 bash[27746]: audit 2026-03-08T22:59:19.024355+0000 mon.a (mon.0) 351 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T22:59:20.281 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:20 vm06 bash[27746]: audit 2026-03-08T22:59:19.024355+0000 mon.a (mon.0) 351 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T22:59:20.281 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:20 vm06 bash[27746]: audit 2026-03-08T22:59:19.034982+0000 mon.a (mon.0) 352 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:59:20.281 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:20 vm06 bash[27746]: audit 2026-03-08T22:59:19.034982+0000 mon.a (mon.0) 352 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:59:20.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:20 vm11 bash[23232]: cephadm 2026-03-08T22:59:18.992535+0000 mgr.y (mgr.14150) 101 : cephadm [INF] Detected new or changed devices on vm06 2026-03-08T22:59:20.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:20 vm11 bash[23232]: cephadm 2026-03-08T22:59:18.992535+0000 mgr.y (mgr.14150) 101 : cephadm [INF] Detected new or changed devices on vm06 2026-03-08T22:59:20.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:20 vm11 bash[23232]: audit 2026-03-08T22:59:19.009898+0000 mon.a (mon.0) 347 : audit [INF] from='mgr.14150 
192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:59:20.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:20 vm11 bash[23232]: audit 2026-03-08T22:59:19.009898+0000 mon.a (mon.0) 347 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:59:20.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:20 vm11 bash[23232]: audit 2026-03-08T22:59:19.022132+0000 mon.a (mon.0) 348 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:59:20.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:20 vm11 bash[23232]: audit 2026-03-08T22:59:19.022132+0000 mon.a (mon.0) 348 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:59:20.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:20 vm11 bash[23232]: audit 2026-03-08T22:59:19.023209+0000 mon.a (mon.0) 349 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm06", "name": "osd_memory_target"}]: dispatch 2026-03-08T22:59:20.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:20 vm11 bash[23232]: audit 2026-03-08T22:59:19.023209+0000 mon.a (mon.0) 349 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm06", "name": "osd_memory_target"}]: dispatch 2026-03-08T22:59:20.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:20 vm11 bash[23232]: audit 2026-03-08T22:59:19.023938+0000 mon.a (mon.0) 350 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T22:59:20.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:20 vm11 bash[23232]: audit 2026-03-08T22:59:19.023938+0000 mon.a (mon.0) 350 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T22:59:20.307 
INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:20 vm11 bash[23232]: audit 2026-03-08T22:59:19.024355+0000 mon.a (mon.0) 351 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T22:59:20.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:20 vm11 bash[23232]: audit 2026-03-08T22:59:19.024355+0000 mon.a (mon.0) 351 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T22:59:20.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:20 vm11 bash[23232]: audit 2026-03-08T22:59:19.034982+0000 mon.a (mon.0) 352 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:59:20.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:20 vm11 bash[23232]: audit 2026-03-08T22:59:19.034982+0000 mon.a (mon.0) 352 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:59:21.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:21 vm11 bash[23232]: cluster 2026-03-08T22:59:19.462092+0000 mgr.y (mgr.14150) 102 : cluster [DBG] pgmap v68: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-08T22:59:21.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:21 vm11 bash[23232]: cluster 2026-03-08T22:59:19.462092+0000 mgr.y (mgr.14150) 102 : cluster [DBG] pgmap v68: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-08T22:59:21.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:21 vm06 bash[20625]: cluster 2026-03-08T22:59:19.462092+0000 mgr.y (mgr.14150) 102 : cluster [DBG] pgmap v68: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-08T22:59:21.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:21 vm06 bash[20625]: cluster 2026-03-08T22:59:19.462092+0000 mgr.y (mgr.14150) 102 : cluster [DBG] pgmap v68: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 
2026-03-08T22:59:21.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:21 vm06 bash[27746]: cluster 2026-03-08T22:59:19.462092+0000 mgr.y (mgr.14150) 102 : cluster [DBG] pgmap v68: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-08T22:59:21.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:21 vm06 bash[27746]: cluster 2026-03-08T22:59:19.462092+0000 mgr.y (mgr.14150) 102 : cluster [DBG] pgmap v68: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-08T22:59:23.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:23 vm11 bash[23232]: cluster 2026-03-08T22:59:21.462379+0000 mgr.y (mgr.14150) 103 : cluster [DBG] pgmap v69: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-08T22:59:23.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:23 vm11 bash[23232]: cluster 2026-03-08T22:59:21.462379+0000 mgr.y (mgr.14150) 103 : cluster [DBG] pgmap v69: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-08T22:59:23.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:23 vm06 bash[20625]: cluster 2026-03-08T22:59:21.462379+0000 mgr.y (mgr.14150) 103 : cluster [DBG] pgmap v69: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-08T22:59:23.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:23 vm06 bash[20625]: cluster 2026-03-08T22:59:21.462379+0000 mgr.y (mgr.14150) 103 : cluster [DBG] pgmap v69: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-08T22:59:23.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:23 vm06 bash[27746]: cluster 2026-03-08T22:59:21.462379+0000 mgr.y (mgr.14150) 103 : cluster [DBG] pgmap v69: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-08T22:59:23.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:23 vm06 bash[27746]: cluster 2026-03-08T22:59:21.462379+0000 mgr.y (mgr.14150) 103 : cluster [DBG] pgmap v69: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-08T22:59:24.395 INFO:teuthology.orchestra.run.vm06.stderr:Inferring 
config /var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/mon.c/config 2026-03-08T22:59:25.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:25 vm11 bash[23232]: cluster 2026-03-08T22:59:23.462668+0000 mgr.y (mgr.14150) 104 : cluster [DBG] pgmap v70: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-08T22:59:25.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:25 vm11 bash[23232]: cluster 2026-03-08T22:59:23.462668+0000 mgr.y (mgr.14150) 104 : cluster [DBG] pgmap v70: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-08T22:59:25.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:25 vm11 bash[23232]: audit 2026-03-08T22:59:24.748385+0000 mon.a (mon.0) 353 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-08T22:59:25.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:25 vm11 bash[23232]: audit 2026-03-08T22:59:24.748385+0000 mon.a (mon.0) 353 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-08T22:59:25.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:25 vm11 bash[23232]: audit 2026-03-08T22:59:24.749545+0000 mon.a (mon.0) 354 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-08T22:59:25.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:25 vm11 bash[23232]: audit 2026-03-08T22:59:24.749545+0000 mon.a (mon.0) 354 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-08T22:59:25.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:25 vm11 bash[23232]: audit 2026-03-08T22:59:24.749888+0000 mon.a (mon.0) 355 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' 
entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T22:59:25.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:25 vm11 bash[23232]: audit 2026-03-08T22:59:24.749888+0000 mon.a (mon.0) 355 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T22:59:25.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:25 vm06 bash[20625]: cluster 2026-03-08T22:59:23.462668+0000 mgr.y (mgr.14150) 104 : cluster [DBG] pgmap v70: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-08T22:59:25.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:25 vm06 bash[20625]: cluster 2026-03-08T22:59:23.462668+0000 mgr.y (mgr.14150) 104 : cluster [DBG] pgmap v70: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-08T22:59:25.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:25 vm06 bash[20625]: audit 2026-03-08T22:59:24.748385+0000 mon.a (mon.0) 353 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-08T22:59:25.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:25 vm06 bash[20625]: audit 2026-03-08T22:59:24.748385+0000 mon.a (mon.0) 353 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-08T22:59:25.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:25 vm06 bash[20625]: audit 2026-03-08T22:59:24.749545+0000 mon.a (mon.0) 354 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-08T22:59:25.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:25 vm06 bash[20625]: audit 2026-03-08T22:59:24.749545+0000 mon.a (mon.0) 354 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 
cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-08T22:59:25.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:25 vm06 bash[20625]: audit 2026-03-08T22:59:24.749888+0000 mon.a (mon.0) 355 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T22:59:25.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:25 vm06 bash[20625]: audit 2026-03-08T22:59:24.749888+0000 mon.a (mon.0) 355 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T22:59:25.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:25 vm06 bash[27746]: cluster 2026-03-08T22:59:23.462668+0000 mgr.y (mgr.14150) 104 : cluster [DBG] pgmap v70: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-08T22:59:25.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:25 vm06 bash[27746]: cluster 2026-03-08T22:59:23.462668+0000 mgr.y (mgr.14150) 104 : cluster [DBG] pgmap v70: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-08T22:59:25.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:25 vm06 bash[27746]: audit 2026-03-08T22:59:24.748385+0000 mon.a (mon.0) 353 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-08T22:59:25.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:25 vm06 bash[27746]: audit 2026-03-08T22:59:24.748385+0000 mon.a (mon.0) 353 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-08T22:59:25.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:25 vm06 bash[27746]: audit 2026-03-08T22:59:24.749545+0000 mon.a (mon.0) 354 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth 
get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-08T22:59:25.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:25 vm06 bash[27746]: audit 2026-03-08T22:59:24.749545+0000 mon.a (mon.0) 354 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-08T22:59:25.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:25 vm06 bash[27746]: audit 2026-03-08T22:59:24.749888+0000 mon.a (mon.0) 355 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T22:59:25.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:25 vm06 bash[27746]: audit 2026-03-08T22:59:24.749888+0000 mon.a (mon.0) 355 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T22:59:26.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:26 vm06 bash[20625]: audit 2026-03-08T22:59:24.747148+0000 mgr.y (mgr.14150) 105 : audit [DBG] from='client.24157 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm06:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T22:59:26.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:26 vm06 bash[20625]: audit 2026-03-08T22:59:24.747148+0000 mgr.y (mgr.14150) 105 : audit [DBG] from='client.24157 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm06:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T22:59:26.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:26 vm06 bash[27746]: audit 2026-03-08T22:59:24.747148+0000 mgr.y (mgr.14150) 105 : audit [DBG] from='client.24157 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm06:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T22:59:26.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:26 vm06 bash[27746]: audit 
2026-03-08T22:59:24.747148+0000 mgr.y (mgr.14150) 105 : audit [DBG] from='client.24157 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm06:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T22:59:26.556 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:26 vm11 bash[23232]: audit 2026-03-08T22:59:24.747148+0000 mgr.y (mgr.14150) 105 : audit [DBG] from='client.24157 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm06:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T22:59:26.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:26 vm11 bash[23232]: audit 2026-03-08T22:59:24.747148+0000 mgr.y (mgr.14150) 105 : audit [DBG] from='client.24157 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm06:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T22:59:27.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:27 vm06 bash[20625]: cluster 2026-03-08T22:59:25.462960+0000 mgr.y (mgr.14150) 106 : cluster [DBG] pgmap v71: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-08T22:59:27.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:27 vm06 bash[20625]: cluster 2026-03-08T22:59:25.462960+0000 mgr.y (mgr.14150) 106 : cluster [DBG] pgmap v71: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-08T22:59:27.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:27 vm06 bash[27746]: cluster 2026-03-08T22:59:25.462960+0000 mgr.y (mgr.14150) 106 : cluster [DBG] pgmap v71: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-08T22:59:27.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:27 vm06 bash[27746]: cluster 2026-03-08T22:59:25.462960+0000 mgr.y (mgr.14150) 106 : cluster [DBG] pgmap v71: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-08T22:59:27.556 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:27 vm11 bash[23232]: cluster 2026-03-08T22:59:25.462960+0000 mgr.y (mgr.14150) 106 : cluster [DBG] pgmap 
v71: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-08T22:59:27.556 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:27 vm11 bash[23232]: cluster 2026-03-08T22:59:25.462960+0000 mgr.y (mgr.14150) 106 : cluster [DBG] pgmap v71: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-08T22:59:29.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:29 vm06 bash[20625]: cluster 2026-03-08T22:59:27.463202+0000 mgr.y (mgr.14150) 107 : cluster [DBG] pgmap v72: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-08T22:59:29.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:29 vm06 bash[20625]: cluster 2026-03-08T22:59:27.463202+0000 mgr.y (mgr.14150) 107 : cluster [DBG] pgmap v72: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-08T22:59:29.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:29 vm06 bash[27746]: cluster 2026-03-08T22:59:27.463202+0000 mgr.y (mgr.14150) 107 : cluster [DBG] pgmap v72: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-08T22:59:29.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:29 vm06 bash[27746]: cluster 2026-03-08T22:59:27.463202+0000 mgr.y (mgr.14150) 107 : cluster [DBG] pgmap v72: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-08T22:59:29.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:29 vm11 bash[23232]: cluster 2026-03-08T22:59:27.463202+0000 mgr.y (mgr.14150) 107 : cluster [DBG] pgmap v72: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-08T22:59:29.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:29 vm11 bash[23232]: cluster 2026-03-08T22:59:27.463202+0000 mgr.y (mgr.14150) 107 : cluster [DBG] pgmap v72: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-08T22:59:31.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:31 vm06 bash[20625]: cluster 2026-03-08T22:59:29.463451+0000 mgr.y (mgr.14150) 108 : cluster [DBG] pgmap v73: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 
2026-03-08T22:59:31.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:31 vm06 bash[20625]: cluster 2026-03-08T22:59:29.463451+0000 mgr.y (mgr.14150) 108 : cluster [DBG] pgmap v73: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-08T22:59:31.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:31 vm06 bash[20625]: audit 2026-03-08T22:59:30.144114+0000 mon.a (mon.0) 356 : audit [INF] from='client.? 192.168.123.106:0/351350462' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "127338cf-5856-4d11-8a9b-9cbd216d8507"}]: dispatch 2026-03-08T22:59:31.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:31 vm06 bash[20625]: audit 2026-03-08T22:59:30.144114+0000 mon.a (mon.0) 356 : audit [INF] from='client.? 192.168.123.106:0/351350462' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "127338cf-5856-4d11-8a9b-9cbd216d8507"}]: dispatch 2026-03-08T22:59:31.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:31 vm06 bash[20625]: audit 2026-03-08T22:59:30.147103+0000 mon.a (mon.0) 357 : audit [INF] from='client.? 192.168.123.106:0/351350462' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "127338cf-5856-4d11-8a9b-9cbd216d8507"}]': finished 2026-03-08T22:59:31.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:31 vm06 bash[20625]: audit 2026-03-08T22:59:30.147103+0000 mon.a (mon.0) 357 : audit [INF] from='client.? 
192.168.123.106:0/351350462' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "127338cf-5856-4d11-8a9b-9cbd216d8507"}]': finished 2026-03-08T22:59:31.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:31 vm06 bash[20625]: cluster 2026-03-08T22:59:30.150383+0000 mon.a (mon.0) 358 : cluster [DBG] osdmap e15: 3 total, 2 up, 3 in 2026-03-08T22:59:31.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:31 vm06 bash[20625]: cluster 2026-03-08T22:59:30.150383+0000 mon.a (mon.0) 358 : cluster [DBG] osdmap e15: 3 total, 2 up, 3 in 2026-03-08T22:59:31.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:31 vm06 bash[20625]: audit 2026-03-08T22:59:30.151193+0000 mon.a (mon.0) 359 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-08T22:59:31.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:31 vm06 bash[20625]: audit 2026-03-08T22:59:30.151193+0000 mon.a (mon.0) 359 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-08T22:59:31.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:31 vm06 bash[20625]: audit 2026-03-08T22:59:30.728463+0000 mon.a (mon.0) 360 : audit [DBG] from='client.? 192.168.123.106:0/4160567109' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-08T22:59:31.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:31 vm06 bash[20625]: audit 2026-03-08T22:59:30.728463+0000 mon.a (mon.0) 360 : audit [DBG] from='client.? 
192.168.123.106:0/4160567109' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-08T22:59:31.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:31 vm06 bash[27746]: cluster 2026-03-08T22:59:29.463451+0000 mgr.y (mgr.14150) 108 : cluster [DBG] pgmap v73: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-08T22:59:31.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:31 vm06 bash[27746]: cluster 2026-03-08T22:59:29.463451+0000 mgr.y (mgr.14150) 108 : cluster [DBG] pgmap v73: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-08T22:59:31.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:31 vm06 bash[27746]: audit 2026-03-08T22:59:30.144114+0000 mon.a (mon.0) 356 : audit [INF] from='client.? 192.168.123.106:0/351350462' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "127338cf-5856-4d11-8a9b-9cbd216d8507"}]: dispatch 2026-03-08T22:59:31.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:31 vm06 bash[27746]: audit 2026-03-08T22:59:30.144114+0000 mon.a (mon.0) 356 : audit [INF] from='client.? 192.168.123.106:0/351350462' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "127338cf-5856-4d11-8a9b-9cbd216d8507"}]: dispatch 2026-03-08T22:59:31.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:31 vm06 bash[27746]: audit 2026-03-08T22:59:30.147103+0000 mon.a (mon.0) 357 : audit [INF] from='client.? 192.168.123.106:0/351350462' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "127338cf-5856-4d11-8a9b-9cbd216d8507"}]': finished 2026-03-08T22:59:31.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:31 vm06 bash[27746]: audit 2026-03-08T22:59:30.147103+0000 mon.a (mon.0) 357 : audit [INF] from='client.? 
192.168.123.106:0/351350462' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "127338cf-5856-4d11-8a9b-9cbd216d8507"}]': finished 2026-03-08T22:59:31.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:31 vm06 bash[27746]: cluster 2026-03-08T22:59:30.150383+0000 mon.a (mon.0) 358 : cluster [DBG] osdmap e15: 3 total, 2 up, 3 in 2026-03-08T22:59:31.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:31 vm06 bash[27746]: cluster 2026-03-08T22:59:30.150383+0000 mon.a (mon.0) 358 : cluster [DBG] osdmap e15: 3 total, 2 up, 3 in 2026-03-08T22:59:31.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:31 vm06 bash[27746]: audit 2026-03-08T22:59:30.151193+0000 mon.a (mon.0) 359 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-08T22:59:31.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:31 vm06 bash[27746]: audit 2026-03-08T22:59:30.151193+0000 mon.a (mon.0) 359 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-08T22:59:31.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:31 vm06 bash[27746]: audit 2026-03-08T22:59:30.728463+0000 mon.a (mon.0) 360 : audit [DBG] from='client.? 192.168.123.106:0/4160567109' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-08T22:59:31.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:31 vm06 bash[27746]: audit 2026-03-08T22:59:30.728463+0000 mon.a (mon.0) 360 : audit [DBG] from='client.? 
192.168.123.106:0/4160567109' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-08T22:59:31.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:31 vm11 bash[23232]: cluster 2026-03-08T22:59:29.463451+0000 mgr.y (mgr.14150) 108 : cluster [DBG] pgmap v73: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-08T22:59:31.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:31 vm11 bash[23232]: cluster 2026-03-08T22:59:29.463451+0000 mgr.y (mgr.14150) 108 : cluster [DBG] pgmap v73: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-08T22:59:31.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:31 vm11 bash[23232]: audit 2026-03-08T22:59:30.144114+0000 mon.a (mon.0) 356 : audit [INF] from='client.? 192.168.123.106:0/351350462' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "127338cf-5856-4d11-8a9b-9cbd216d8507"}]: dispatch 2026-03-08T22:59:31.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:31 vm11 bash[23232]: audit 2026-03-08T22:59:30.144114+0000 mon.a (mon.0) 356 : audit [INF] from='client.? 192.168.123.106:0/351350462' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "127338cf-5856-4d11-8a9b-9cbd216d8507"}]: dispatch 2026-03-08T22:59:31.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:31 vm11 bash[23232]: audit 2026-03-08T22:59:30.147103+0000 mon.a (mon.0) 357 : audit [INF] from='client.? 192.168.123.106:0/351350462' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "127338cf-5856-4d11-8a9b-9cbd216d8507"}]': finished 2026-03-08T22:59:31.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:31 vm11 bash[23232]: audit 2026-03-08T22:59:30.147103+0000 mon.a (mon.0) 357 : audit [INF] from='client.? 
192.168.123.106:0/351350462' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "127338cf-5856-4d11-8a9b-9cbd216d8507"}]': finished 2026-03-08T22:59:31.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:31 vm11 bash[23232]: cluster 2026-03-08T22:59:30.150383+0000 mon.a (mon.0) 358 : cluster [DBG] osdmap e15: 3 total, 2 up, 3 in 2026-03-08T22:59:31.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:31 vm11 bash[23232]: cluster 2026-03-08T22:59:30.150383+0000 mon.a (mon.0) 358 : cluster [DBG] osdmap e15: 3 total, 2 up, 3 in 2026-03-08T22:59:31.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:31 vm11 bash[23232]: audit 2026-03-08T22:59:30.151193+0000 mon.a (mon.0) 359 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-08T22:59:31.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:31 vm11 bash[23232]: audit 2026-03-08T22:59:30.151193+0000 mon.a (mon.0) 359 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-08T22:59:31.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:31 vm11 bash[23232]: audit 2026-03-08T22:59:30.728463+0000 mon.a (mon.0) 360 : audit [DBG] from='client.? 192.168.123.106:0/4160567109' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-08T22:59:31.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:31 vm11 bash[23232]: audit 2026-03-08T22:59:30.728463+0000 mon.a (mon.0) 360 : audit [DBG] from='client.? 
192.168.123.106:0/4160567109' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-08T22:59:33.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:33 vm06 bash[20625]: cluster 2026-03-08T22:59:31.463702+0000 mgr.y (mgr.14150) 109 : cluster [DBG] pgmap v75: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-08T22:59:33.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:33 vm06 bash[20625]: cluster 2026-03-08T22:59:31.463702+0000 mgr.y (mgr.14150) 109 : cluster [DBG] pgmap v75: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-08T22:59:33.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:33 vm06 bash[27746]: cluster 2026-03-08T22:59:31.463702+0000 mgr.y (mgr.14150) 109 : cluster [DBG] pgmap v75: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-08T22:59:33.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:33 vm06 bash[27746]: cluster 2026-03-08T22:59:31.463702+0000 mgr.y (mgr.14150) 109 : cluster [DBG] pgmap v75: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-08T22:59:33.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:33 vm11 bash[23232]: cluster 2026-03-08T22:59:31.463702+0000 mgr.y (mgr.14150) 109 : cluster [DBG] pgmap v75: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-08T22:59:33.566 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:33 vm11 bash[23232]: cluster 2026-03-08T22:59:31.463702+0000 mgr.y (mgr.14150) 109 : cluster [DBG] pgmap v75: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-08T22:59:35.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:35 vm06 bash[20625]: cluster 2026-03-08T22:59:33.463955+0000 mgr.y (mgr.14150) 110 : cluster [DBG] pgmap v76: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-08T22:59:35.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:35 vm06 bash[20625]: cluster 2026-03-08T22:59:33.463955+0000 mgr.y (mgr.14150) 110 : cluster [DBG] pgmap v76: 0 pgs: ; 0 B data, 53 MiB used, 
40 GiB / 40 GiB avail 2026-03-08T22:59:35.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:35 vm06 bash[27746]: cluster 2026-03-08T22:59:33.463955+0000 mgr.y (mgr.14150) 110 : cluster [DBG] pgmap v76: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-08T22:59:35.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:35 vm06 bash[27746]: cluster 2026-03-08T22:59:33.463955+0000 mgr.y (mgr.14150) 110 : cluster [DBG] pgmap v76: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-08T22:59:35.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:35 vm11 bash[23232]: cluster 2026-03-08T22:59:33.463955+0000 mgr.y (mgr.14150) 110 : cluster [DBG] pgmap v76: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-08T22:59:35.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:35 vm11 bash[23232]: cluster 2026-03-08T22:59:33.463955+0000 mgr.y (mgr.14150) 110 : cluster [DBG] pgmap v76: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-08T22:59:37.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:37 vm06 bash[20625]: cluster 2026-03-08T22:59:35.464279+0000 mgr.y (mgr.14150) 111 : cluster [DBG] pgmap v77: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-08T22:59:37.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:37 vm06 bash[20625]: cluster 2026-03-08T22:59:35.464279+0000 mgr.y (mgr.14150) 111 : cluster [DBG] pgmap v77: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-08T22:59:37.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:37 vm06 bash[27746]: cluster 2026-03-08T22:59:35.464279+0000 mgr.y (mgr.14150) 111 : cluster [DBG] pgmap v77: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-08T22:59:37.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:37 vm06 bash[27746]: cluster 2026-03-08T22:59:35.464279+0000 mgr.y (mgr.14150) 111 : cluster [DBG] pgmap v77: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-08T22:59:37.557 
INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:37 vm11 bash[23232]: cluster 2026-03-08T22:59:35.464279+0000 mgr.y (mgr.14150) 111 : cluster [DBG] pgmap v77: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-08T22:59:37.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:37 vm11 bash[23232]: cluster 2026-03-08T22:59:35.464279+0000 mgr.y (mgr.14150) 111 : cluster [DBG] pgmap v77: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-08T22:59:39.378 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:39 vm06 bash[27746]: cluster 2026-03-08T22:59:37.464572+0000 mgr.y (mgr.14150) 112 : cluster [DBG] pgmap v78: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-08T22:59:39.378 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:39 vm06 bash[27746]: cluster 2026-03-08T22:59:37.464572+0000 mgr.y (mgr.14150) 112 : cluster [DBG] pgmap v78: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-08T22:59:39.378 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:39 vm06 bash[20625]: cluster 2026-03-08T22:59:37.464572+0000 mgr.y (mgr.14150) 112 : cluster [DBG] pgmap v78: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-08T22:59:39.378 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:39 vm06 bash[20625]: cluster 2026-03-08T22:59:37.464572+0000 mgr.y (mgr.14150) 112 : cluster [DBG] pgmap v78: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-08T22:59:39.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:39 vm11 bash[23232]: cluster 2026-03-08T22:59:37.464572+0000 mgr.y (mgr.14150) 112 : cluster [DBG] pgmap v78: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-08T22:59:39.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:39 vm11 bash[23232]: cluster 2026-03-08T22:59:37.464572+0000 mgr.y (mgr.14150) 112 : cluster [DBG] pgmap v78: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-08T22:59:39.954 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:59:39 vm06 systemd[1]: 
/etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T22:59:39.955 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:39 vm06 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T22:59:39.955 INFO:journalctl@ceph.osd.0.vm06.stdout:Mar 08 22:59:39 vm06 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T22:59:39.955 INFO:journalctl@ceph.osd.1.vm06.stdout:Mar 08 22:59:39 vm06 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T22:59:39.955 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:39 vm06 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. 
This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T22:59:40.266 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:40 vm06 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T22:59:40.266 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:40 vm06 bash[27746]: audit 2026-03-08T22:59:39.116279+0000 mon.a (mon.0) 361 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-08T22:59:40.266 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:40 vm06 bash[27746]: audit 2026-03-08T22:59:39.116279+0000 mon.a (mon.0) 361 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-08T22:59:40.267 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:40 vm06 bash[27746]: audit 2026-03-08T22:59:39.116777+0000 mon.a (mon.0) 362 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T22:59:40.267 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:40 vm06 bash[27746]: audit 2026-03-08T22:59:39.116777+0000 mon.a (mon.0) 362 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T22:59:40.267 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:40 vm06 bash[27746]: cephadm 2026-03-08T22:59:39.117218+0000 mgr.y 
(mgr.14150) 113 : cephadm [INF] Deploying daemon osd.2 on vm06 2026-03-08T22:59:40.267 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:40 vm06 bash[27746]: cephadm 2026-03-08T22:59:39.117218+0000 mgr.y (mgr.14150) 113 : cephadm [INF] Deploying daemon osd.2 on vm06 2026-03-08T22:59:40.267 INFO:journalctl@ceph.osd.0.vm06.stdout:Mar 08 22:59:40 vm06 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T22:59:40.267 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 22:59:40 vm06 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T22:59:40.267 INFO:journalctl@ceph.osd.1.vm06.stdout:Mar 08 22:59:40 vm06 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T22:59:40.267 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:40 vm06 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T22:59:40.267 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:40 vm06 bash[20625]: audit 2026-03-08T22:59:39.116279+0000 mon.a (mon.0) 361 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-08T22:59:40.267 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:40 vm06 bash[20625]: audit 2026-03-08T22:59:39.116279+0000 mon.a (mon.0) 361 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-08T22:59:40.267 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:40 vm06 bash[20625]: audit 2026-03-08T22:59:39.116777+0000 mon.a (mon.0) 362 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T22:59:40.267 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:40 vm06 bash[20625]: audit 2026-03-08T22:59:39.116777+0000 mon.a (mon.0) 362 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T22:59:40.267 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:40 vm06 bash[20625]: cephadm 2026-03-08T22:59:39.117218+0000 mgr.y (mgr.14150) 113 : cephadm [INF] Deploying daemon osd.2 on vm06 2026-03-08T22:59:40.267 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:40 vm06 bash[20625]: cephadm 2026-03-08T22:59:39.117218+0000 mgr.y (mgr.14150) 113 : cephadm [INF] Deploying daemon osd.2 on vm06 2026-03-08T22:59:40.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:40 vm11 bash[23232]: audit 2026-03-08T22:59:39.116279+0000 mon.a (mon.0) 361 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: 
dispatch 2026-03-08T22:59:40.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:40 vm11 bash[23232]: audit 2026-03-08T22:59:39.116279+0000 mon.a (mon.0) 361 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-08T22:59:40.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:40 vm11 bash[23232]: audit 2026-03-08T22:59:39.116777+0000 mon.a (mon.0) 362 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T22:59:40.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:40 vm11 bash[23232]: audit 2026-03-08T22:59:39.116777+0000 mon.a (mon.0) 362 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T22:59:40.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:40 vm11 bash[23232]: cephadm 2026-03-08T22:59:39.117218+0000 mgr.y (mgr.14150) 113 : cephadm [INF] Deploying daemon osd.2 on vm06 2026-03-08T22:59:40.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:40 vm11 bash[23232]: cephadm 2026-03-08T22:59:39.117218+0000 mgr.y (mgr.14150) 113 : cephadm [INF] Deploying daemon osd.2 on vm06 2026-03-08T22:59:41.396 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:41 vm06 bash[20625]: cluster 2026-03-08T22:59:39.464819+0000 mgr.y (mgr.14150) 114 : cluster [DBG] pgmap v79: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-08T22:59:41.396 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:41 vm06 bash[20625]: cluster 2026-03-08T22:59:39.464819+0000 mgr.y (mgr.14150) 114 : cluster [DBG] pgmap v79: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-08T22:59:41.397 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:41 vm06 bash[20625]: audit 2026-03-08T22:59:40.216638+0000 mon.a (mon.0) 363 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 
cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T22:59:41.397 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:41 vm06 bash[20625]: audit 2026-03-08T22:59:40.216638+0000 mon.a (mon.0) 363 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T22:59:41.397 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:41 vm06 bash[20625]: audit 2026-03-08T22:59:40.223519+0000 mon.a (mon.0) 364 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:59:41.397 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:41 vm06 bash[20625]: audit 2026-03-08T22:59:40.223519+0000 mon.a (mon.0) 364 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:59:41.397 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:41 vm06 bash[20625]: audit 2026-03-08T22:59:40.231577+0000 mon.a (mon.0) 365 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:59:41.397 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:41 vm06 bash[20625]: audit 2026-03-08T22:59:40.231577+0000 mon.a (mon.0) 365 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:59:41.397 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:41 vm06 bash[27746]: cluster 2026-03-08T22:59:39.464819+0000 mgr.y (mgr.14150) 114 : cluster [DBG] pgmap v79: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-08T22:59:41.397 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:41 vm06 bash[27746]: cluster 2026-03-08T22:59:39.464819+0000 mgr.y (mgr.14150) 114 : cluster [DBG] pgmap v79: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-08T22:59:41.397 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:41 vm06 bash[27746]: audit 2026-03-08T22:59:40.216638+0000 mon.a (mon.0) 363 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", 
"format": "json"}]: dispatch 2026-03-08T22:59:41.397 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:41 vm06 bash[27746]: audit 2026-03-08T22:59:40.216638+0000 mon.a (mon.0) 363 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T22:59:41.397 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:41 vm06 bash[27746]: audit 2026-03-08T22:59:40.223519+0000 mon.a (mon.0) 364 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:59:41.397 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:41 vm06 bash[27746]: audit 2026-03-08T22:59:40.223519+0000 mon.a (mon.0) 364 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:59:41.397 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:41 vm06 bash[27746]: audit 2026-03-08T22:59:40.231577+0000 mon.a (mon.0) 365 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:59:41.397 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:41 vm06 bash[27746]: audit 2026-03-08T22:59:40.231577+0000 mon.a (mon.0) 365 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:59:41.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:41 vm11 bash[23232]: cluster 2026-03-08T22:59:39.464819+0000 mgr.y (mgr.14150) 114 : cluster [DBG] pgmap v79: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-08T22:59:41.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:41 vm11 bash[23232]: cluster 2026-03-08T22:59:39.464819+0000 mgr.y (mgr.14150) 114 : cluster [DBG] pgmap v79: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-08T22:59:41.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:41 vm11 bash[23232]: audit 2026-03-08T22:59:40.216638+0000 mon.a (mon.0) 363 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 
2026-03-08T22:59:41.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:41 vm11 bash[23232]: audit 2026-03-08T22:59:40.216638+0000 mon.a (mon.0) 363 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T22:59:41.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:41 vm11 bash[23232]: audit 2026-03-08T22:59:40.223519+0000 mon.a (mon.0) 364 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:59:41.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:41 vm11 bash[23232]: audit 2026-03-08T22:59:40.223519+0000 mon.a (mon.0) 364 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:59:41.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:41 vm11 bash[23232]: audit 2026-03-08T22:59:40.231577+0000 mon.a (mon.0) 365 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:59:41.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:41 vm11 bash[23232]: audit 2026-03-08T22:59:40.231577+0000 mon.a (mon.0) 365 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:59:43.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:43 vm06 bash[20625]: cluster 2026-03-08T22:59:41.465124+0000 mgr.y (mgr.14150) 115 : cluster [DBG] pgmap v80: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-08T22:59:43.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:43 vm06 bash[20625]: cluster 2026-03-08T22:59:41.465124+0000 mgr.y (mgr.14150) 115 : cluster [DBG] pgmap v80: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-08T22:59:43.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:43 vm06 bash[27746]: cluster 2026-03-08T22:59:41.465124+0000 mgr.y (mgr.14150) 115 : cluster [DBG] pgmap v80: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-08T22:59:43.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:43 
vm06 bash[27746]: cluster 2026-03-08T22:59:41.465124+0000 mgr.y (mgr.14150) 115 : cluster [DBG] pgmap v80: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-08T22:59:43.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:43 vm11 bash[23232]: cluster 2026-03-08T22:59:41.465124+0000 mgr.y (mgr.14150) 115 : cluster [DBG] pgmap v80: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-08T22:59:43.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:43 vm11 bash[23232]: cluster 2026-03-08T22:59:41.465124+0000 mgr.y (mgr.14150) 115 : cluster [DBG] pgmap v80: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-08T22:59:44.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:44 vm06 bash[20625]: audit 2026-03-08T22:59:43.578052+0000 mon.c (mon.2) 4 : audit [INF] from='osd.2 v2:192.168.123.106:6809/2508962009' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-08T22:59:44.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:44 vm06 bash[20625]: audit 2026-03-08T22:59:43.578052+0000 mon.c (mon.2) 4 : audit [INF] from='osd.2 v2:192.168.123.106:6809/2508962009' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-08T22:59:44.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:44 vm06 bash[20625]: audit 2026-03-08T22:59:43.578271+0000 mon.a (mon.0) 366 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-08T22:59:44.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:44 vm06 bash[20625]: audit 2026-03-08T22:59:43.578271+0000 mon.a (mon.0) 366 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-08T22:59:44.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:44 vm06 bash[27746]: audit 2026-03-08T22:59:43.578052+0000 mon.c (mon.2) 4 : audit 
[INF] from='osd.2 v2:192.168.123.106:6809/2508962009' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-08T22:59:44.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:44 vm06 bash[27746]: audit 2026-03-08T22:59:43.578052+0000 mon.c (mon.2) 4 : audit [INF] from='osd.2 v2:192.168.123.106:6809/2508962009' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-08T22:59:44.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:44 vm06 bash[27746]: audit 2026-03-08T22:59:43.578271+0000 mon.a (mon.0) 366 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-08T22:59:44.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:44 vm06 bash[27746]: audit 2026-03-08T22:59:43.578271+0000 mon.a (mon.0) 366 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-08T22:59:44.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:44 vm11 bash[23232]: audit 2026-03-08T22:59:43.578052+0000 mon.c (mon.2) 4 : audit [INF] from='osd.2 v2:192.168.123.106:6809/2508962009' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-08T22:59:44.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:44 vm11 bash[23232]: audit 2026-03-08T22:59:43.578052+0000 mon.c (mon.2) 4 : audit [INF] from='osd.2 v2:192.168.123.106:6809/2508962009' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-08T22:59:44.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:44 vm11 bash[23232]: audit 2026-03-08T22:59:43.578271+0000 mon.a (mon.0) 366 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-08T22:59:44.557 
INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:44 vm11 bash[23232]: audit 2026-03-08T22:59:43.578271+0000 mon.a (mon.0) 366 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-08T22:59:45.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:45 vm06 bash[20625]: cluster 2026-03-08T22:59:43.465401+0000 mgr.y (mgr.14150) 116 : cluster [DBG] pgmap v81: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-08T22:59:45.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:45 vm06 bash[20625]: cluster 2026-03-08T22:59:43.465401+0000 mgr.y (mgr.14150) 116 : cluster [DBG] pgmap v81: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-08T22:59:45.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:45 vm06 bash[20625]: audit 2026-03-08T22:59:44.157360+0000 mon.a (mon.0) 367 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished 2026-03-08T22:59:45.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:45 vm06 bash[20625]: audit 2026-03-08T22:59:44.157360+0000 mon.a (mon.0) 367 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished 2026-03-08T22:59:45.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:45 vm06 bash[20625]: cluster 2026-03-08T22:59:44.167950+0000 mon.a (mon.0) 368 : cluster [DBG] osdmap e16: 3 total, 2 up, 3 in 2026-03-08T22:59:45.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:45 vm06 bash[20625]: cluster 2026-03-08T22:59:44.167950+0000 mon.a (mon.0) 368 : cluster [DBG] osdmap e16: 3 total, 2 up, 3 in 2026-03-08T22:59:45.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:45 vm06 bash[20625]: audit 2026-03-08T22:59:44.168110+0000 mon.a (mon.0) 369 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 
2026-03-08T22:59:45.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:45 vm06 bash[20625]: audit 2026-03-08T22:59:44.168110+0000 mon.a (mon.0) 369 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-08T22:59:45.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:45 vm06 bash[20625]: audit 2026-03-08T22:59:44.168531+0000 mon.c (mon.2) 5 : audit [INF] from='osd.2 v2:192.168.123.106:6809/2508962009' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm06", "root=default"]}]: dispatch 2026-03-08T22:59:45.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:45 vm06 bash[20625]: audit 2026-03-08T22:59:44.168531+0000 mon.c (mon.2) 5 : audit [INF] from='osd.2 v2:192.168.123.106:6809/2508962009' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm06", "root=default"]}]: dispatch 2026-03-08T22:59:45.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:45 vm06 bash[20625]: audit 2026-03-08T22:59:44.168782+0000 mon.a (mon.0) 370 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm06", "root=default"]}]: dispatch 2026-03-08T22:59:45.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:45 vm06 bash[20625]: audit 2026-03-08T22:59:44.168782+0000 mon.a (mon.0) 370 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm06", "root=default"]}]: dispatch 2026-03-08T22:59:45.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:45 vm06 bash[27746]: cluster 2026-03-08T22:59:43.465401+0000 mgr.y (mgr.14150) 116 : cluster [DBG] pgmap v81: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-08T22:59:45.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:45 vm06 bash[27746]: cluster 2026-03-08T22:59:43.465401+0000 mgr.y 
(mgr.14150) 116 : cluster [DBG] pgmap v81: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-08T22:59:45.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:45 vm06 bash[27746]: audit 2026-03-08T22:59:44.157360+0000 mon.a (mon.0) 367 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished 2026-03-08T22:59:45.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:45 vm06 bash[27746]: audit 2026-03-08T22:59:44.157360+0000 mon.a (mon.0) 367 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished 2026-03-08T22:59:45.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:45 vm06 bash[27746]: cluster 2026-03-08T22:59:44.167950+0000 mon.a (mon.0) 368 : cluster [DBG] osdmap e16: 3 total, 2 up, 3 in 2026-03-08T22:59:45.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:45 vm06 bash[27746]: cluster 2026-03-08T22:59:44.167950+0000 mon.a (mon.0) 368 : cluster [DBG] osdmap e16: 3 total, 2 up, 3 in 2026-03-08T22:59:45.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:45 vm06 bash[27746]: audit 2026-03-08T22:59:44.168110+0000 mon.a (mon.0) 369 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-08T22:59:45.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:45 vm06 bash[27746]: audit 2026-03-08T22:59:44.168110+0000 mon.a (mon.0) 369 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-08T22:59:45.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:45 vm06 bash[27746]: audit 2026-03-08T22:59:44.168531+0000 mon.c (mon.2) 5 : audit [INF] from='osd.2 v2:192.168.123.106:6809/2508962009' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm06", "root=default"]}]: dispatch 
2026-03-08T22:59:45.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:45 vm06 bash[27746]: audit 2026-03-08T22:59:44.168531+0000 mon.c (mon.2) 5 : audit [INF] from='osd.2 v2:192.168.123.106:6809/2508962009' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm06", "root=default"]}]: dispatch 2026-03-08T22:59:45.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:45 vm06 bash[27746]: audit 2026-03-08T22:59:44.168782+0000 mon.a (mon.0) 370 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm06", "root=default"]}]: dispatch 2026-03-08T22:59:45.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:45 vm06 bash[27746]: audit 2026-03-08T22:59:44.168782+0000 mon.a (mon.0) 370 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm06", "root=default"]}]: dispatch 2026-03-08T22:59:45.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:45 vm11 bash[23232]: cluster 2026-03-08T22:59:43.465401+0000 mgr.y (mgr.14150) 116 : cluster [DBG] pgmap v81: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-08T22:59:45.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:45 vm11 bash[23232]: cluster 2026-03-08T22:59:43.465401+0000 mgr.y (mgr.14150) 116 : cluster [DBG] pgmap v81: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-08T22:59:45.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:45 vm11 bash[23232]: audit 2026-03-08T22:59:44.157360+0000 mon.a (mon.0) 367 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished 2026-03-08T22:59:45.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:45 vm11 bash[23232]: audit 2026-03-08T22:59:44.157360+0000 mon.a (mon.0) 367 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": 
"hdd", "ids": ["2"]}]': finished 2026-03-08T22:59:45.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:45 vm11 bash[23232]: cluster 2026-03-08T22:59:44.167950+0000 mon.a (mon.0) 368 : cluster [DBG] osdmap e16: 3 total, 2 up, 3 in 2026-03-08T22:59:45.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:45 vm11 bash[23232]: cluster 2026-03-08T22:59:44.167950+0000 mon.a (mon.0) 368 : cluster [DBG] osdmap e16: 3 total, 2 up, 3 in 2026-03-08T22:59:45.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:45 vm11 bash[23232]: audit 2026-03-08T22:59:44.168110+0000 mon.a (mon.0) 369 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-08T22:59:45.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:45 vm11 bash[23232]: audit 2026-03-08T22:59:44.168110+0000 mon.a (mon.0) 369 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-08T22:59:45.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:45 vm11 bash[23232]: audit 2026-03-08T22:59:44.168531+0000 mon.c (mon.2) 5 : audit [INF] from='osd.2 v2:192.168.123.106:6809/2508962009' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm06", "root=default"]}]: dispatch 2026-03-08T22:59:45.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:45 vm11 bash[23232]: audit 2026-03-08T22:59:44.168531+0000 mon.c (mon.2) 5 : audit [INF] from='osd.2 v2:192.168.123.106:6809/2508962009' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm06", "root=default"]}]: dispatch 2026-03-08T22:59:45.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:45 vm11 bash[23232]: audit 2026-03-08T22:59:44.168782+0000 mon.a (mon.0) 370 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm06", 
"root=default"]}]: dispatch 2026-03-08T22:59:45.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:45 vm11 bash[23232]: audit 2026-03-08T22:59:44.168782+0000 mon.a (mon.0) 370 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm06", "root=default"]}]: dispatch 2026-03-08T22:59:46.470 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:46 vm06 bash[20625]: audit 2026-03-08T22:59:45.169051+0000 mon.a (mon.0) 371 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm06", "root=default"]}]': finished 2026-03-08T22:59:46.470 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:46 vm06 bash[20625]: audit 2026-03-08T22:59:45.169051+0000 mon.a (mon.0) 371 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm06", "root=default"]}]': finished 2026-03-08T22:59:46.470 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:46 vm06 bash[20625]: cluster 2026-03-08T22:59:45.177924+0000 mon.a (mon.0) 372 : cluster [DBG] osdmap e17: 3 total, 2 up, 3 in 2026-03-08T22:59:46.470 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:46 vm06 bash[20625]: cluster 2026-03-08T22:59:45.177924+0000 mon.a (mon.0) 372 : cluster [DBG] osdmap e17: 3 total, 2 up, 3 in 2026-03-08T22:59:46.470 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:46 vm06 bash[20625]: audit 2026-03-08T22:59:45.192649+0000 mon.a (mon.0) 373 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-08T22:59:46.470 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:46 vm06 bash[20625]: audit 2026-03-08T22:59:45.192649+0000 mon.a (mon.0) 373 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-08T22:59:46.470 
INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:46 vm06 bash[20625]: audit 2026-03-08T22:59:46.176361+0000 mon.a (mon.0) 374 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-08T22:59:46.470 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:46 vm06 bash[20625]: audit 2026-03-08T22:59:46.176361+0000 mon.a (mon.0) 374 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-08T22:59:46.470 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:46 vm06 bash[27746]: audit 2026-03-08T22:59:45.169051+0000 mon.a (mon.0) 371 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm06", "root=default"]}]': finished 2026-03-08T22:59:46.470 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:46 vm06 bash[27746]: audit 2026-03-08T22:59:45.169051+0000 mon.a (mon.0) 371 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm06", "root=default"]}]': finished 2026-03-08T22:59:46.470 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:46 vm06 bash[27746]: cluster 2026-03-08T22:59:45.177924+0000 mon.a (mon.0) 372 : cluster [DBG] osdmap e17: 3 total, 2 up, 3 in 2026-03-08T22:59:46.470 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:46 vm06 bash[27746]: cluster 2026-03-08T22:59:45.177924+0000 mon.a (mon.0) 372 : cluster [DBG] osdmap e17: 3 total, 2 up, 3 in 2026-03-08T22:59:46.470 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:46 vm06 bash[27746]: audit 2026-03-08T22:59:45.192649+0000 mon.a (mon.0) 373 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-08T22:59:46.470 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:46 vm06 bash[27746]: audit 2026-03-08T22:59:45.192649+0000 mon.a 
(mon.0) 373 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-08T22:59:46.470 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:46 vm06 bash[27746]: audit 2026-03-08T22:59:46.176361+0000 mon.a (mon.0) 374 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-08T22:59:46.470 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:46 vm06 bash[27746]: audit 2026-03-08T22:59:46.176361+0000 mon.a (mon.0) 374 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-08T22:59:46.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:46 vm11 bash[23232]: audit 2026-03-08T22:59:45.169051+0000 mon.a (mon.0) 371 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm06", "root=default"]}]': finished 2026-03-08T22:59:46.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:46 vm11 bash[23232]: audit 2026-03-08T22:59:45.169051+0000 mon.a (mon.0) 371 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm06", "root=default"]}]': finished 2026-03-08T22:59:46.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:46 vm11 bash[23232]: cluster 2026-03-08T22:59:45.177924+0000 mon.a (mon.0) 372 : cluster [DBG] osdmap e17: 3 total, 2 up, 3 in 2026-03-08T22:59:46.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:46 vm11 bash[23232]: cluster 2026-03-08T22:59:45.177924+0000 mon.a (mon.0) 372 : cluster [DBG] osdmap e17: 3 total, 2 up, 3 in 2026-03-08T22:59:46.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:46 vm11 bash[23232]: audit 2026-03-08T22:59:45.192649+0000 mon.a (mon.0) 373 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd 
metadata", "id": 2}]: dispatch 2026-03-08T22:59:46.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:46 vm11 bash[23232]: audit 2026-03-08T22:59:45.192649+0000 mon.a (mon.0) 373 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-08T22:59:46.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:46 vm11 bash[23232]: audit 2026-03-08T22:59:46.176361+0000 mon.a (mon.0) 374 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-08T22:59:46.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:46 vm11 bash[23232]: audit 2026-03-08T22:59:46.176361+0000 mon.a (mon.0) 374 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-08T22:59:47.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:47 vm06 bash[20625]: cluster 2026-03-08T22:59:44.575035+0000 osd.2 (osd.2) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-08T22:59:47.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:47 vm06 bash[20625]: cluster 2026-03-08T22:59:44.575035+0000 osd.2 (osd.2) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-08T22:59:47.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:47 vm06 bash[20625]: cluster 2026-03-08T22:59:44.575085+0000 osd.2 (osd.2) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-08T22:59:47.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:47 vm06 bash[20625]: cluster 2026-03-08T22:59:44.575085+0000 osd.2 (osd.2) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-08T22:59:47.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:47 vm06 bash[20625]: cluster 2026-03-08T22:59:45.465712+0000 mgr.y (mgr.14150) 117 : cluster [DBG] pgmap v84: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-08T22:59:47.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:47 vm06 bash[20625]: cluster 
2026-03-08T22:59:45.465712+0000 mgr.y (mgr.14150) 117 : cluster [DBG] pgmap v84: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-08T22:59:47.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:47 vm06 bash[20625]: audit 2026-03-08T22:59:46.516432+0000 mon.a (mon.0) 375 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:59:47.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:47 vm06 bash[20625]: audit 2026-03-08T22:59:46.516432+0000 mon.a (mon.0) 375 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:59:47.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:47 vm06 bash[20625]: audit 2026-03-08T22:59:46.522078+0000 mon.a (mon.0) 376 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:59:47.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:47 vm06 bash[20625]: audit 2026-03-08T22:59:46.522078+0000 mon.a (mon.0) 376 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:59:47.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:47 vm06 bash[20625]: audit 2026-03-08T22:59:46.624116+0000 mon.a (mon.0) 377 : audit [INF] from='osd.2 ' entity='osd.2' 2026-03-08T22:59:47.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:47 vm06 bash[20625]: audit 2026-03-08T22:59:46.624116+0000 mon.a (mon.0) 377 : audit [INF] from='osd.2 ' entity='osd.2' 2026-03-08T22:59:47.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:47 vm06 bash[20625]: audit 2026-03-08T22:59:46.896473+0000 mon.a (mon.0) 378 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T22:59:47.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:47 vm06 bash[20625]: audit 2026-03-08T22:59:46.896473+0000 mon.a (mon.0) 378 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config 
generate-minimal-conf"}]: dispatch 2026-03-08T22:59:47.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:47 vm06 bash[20625]: audit 2026-03-08T22:59:46.897179+0000 mon.a (mon.0) 379 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T22:59:47.531 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:47 vm06 bash[20625]: audit 2026-03-08T22:59:46.897179+0000 mon.a (mon.0) 379 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T22:59:47.531 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:47 vm06 bash[20625]: audit 2026-03-08T22:59:46.902312+0000 mon.a (mon.0) 380 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:59:47.531 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:47 vm06 bash[20625]: audit 2026-03-08T22:59:46.902312+0000 mon.a (mon.0) 380 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:59:47.531 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:47 vm06 bash[20625]: audit 2026-03-08T22:59:47.176444+0000 mon.a (mon.0) 381 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-08T22:59:47.531 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:47 vm06 bash[20625]: audit 2026-03-08T22:59:47.176444+0000 mon.a (mon.0) 381 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-08T22:59:47.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:47 vm06 bash[27746]: cluster 2026-03-08T22:59:44.575035+0000 osd.2 (osd.2) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-08T22:59:47.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:47 vm06 bash[27746]: cluster 2026-03-08T22:59:44.575035+0000 osd.2 (osd.2) 1 : cluster 
[DBG] purged_snaps scrub starts 2026-03-08T22:59:47.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:47 vm06 bash[27746]: cluster 2026-03-08T22:59:44.575085+0000 osd.2 (osd.2) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-08T22:59:47.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:47 vm06 bash[27746]: cluster 2026-03-08T22:59:44.575085+0000 osd.2 (osd.2) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-08T22:59:47.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:47 vm06 bash[27746]: cluster 2026-03-08T22:59:45.465712+0000 mgr.y (mgr.14150) 117 : cluster [DBG] pgmap v84: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-08T22:59:47.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:47 vm06 bash[27746]: cluster 2026-03-08T22:59:45.465712+0000 mgr.y (mgr.14150) 117 : cluster [DBG] pgmap v84: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-08T22:59:47.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:47 vm06 bash[27746]: audit 2026-03-08T22:59:46.516432+0000 mon.a (mon.0) 375 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:59:47.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:47 vm06 bash[27746]: audit 2026-03-08T22:59:46.516432+0000 mon.a (mon.0) 375 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:59:47.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:47 vm06 bash[27746]: audit 2026-03-08T22:59:46.522078+0000 mon.a (mon.0) 376 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:59:47.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:47 vm06 bash[27746]: audit 2026-03-08T22:59:46.522078+0000 mon.a (mon.0) 376 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:59:47.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:47 vm06 bash[27746]: audit 2026-03-08T22:59:46.624116+0000 mon.a (mon.0) 377 : audit [INF] from='osd.2 ' 
entity='osd.2' 2026-03-08T22:59:47.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:47 vm06 bash[27746]: audit 2026-03-08T22:59:46.624116+0000 mon.a (mon.0) 377 : audit [INF] from='osd.2 ' entity='osd.2' 2026-03-08T22:59:47.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:47 vm06 bash[27746]: audit 2026-03-08T22:59:46.896473+0000 mon.a (mon.0) 378 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T22:59:47.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:47 vm06 bash[27746]: audit 2026-03-08T22:59:46.896473+0000 mon.a (mon.0) 378 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T22:59:47.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:47 vm06 bash[27746]: audit 2026-03-08T22:59:46.897179+0000 mon.a (mon.0) 379 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T22:59:47.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:47 vm06 bash[27746]: audit 2026-03-08T22:59:46.897179+0000 mon.a (mon.0) 379 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T22:59:47.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:47 vm06 bash[27746]: audit 2026-03-08T22:59:46.902312+0000 mon.a (mon.0) 380 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:59:47.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:47 vm06 bash[27746]: audit 2026-03-08T22:59:46.902312+0000 mon.a (mon.0) 380 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:59:47.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:47 vm06 bash[27746]: audit 2026-03-08T22:59:47.176444+0000 mon.a (mon.0) 381 : audit [DBG] 
from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-08T22:59:47.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:47 vm06 bash[27746]: audit 2026-03-08T22:59:47.176444+0000 mon.a (mon.0) 381 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-08T22:59:47.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:47 vm11 bash[23232]: cluster 2026-03-08T22:59:44.575035+0000 osd.2 (osd.2) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-08T22:59:47.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:47 vm11 bash[23232]: cluster 2026-03-08T22:59:44.575035+0000 osd.2 (osd.2) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-08T22:59:47.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:47 vm11 bash[23232]: cluster 2026-03-08T22:59:44.575085+0000 osd.2 (osd.2) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-08T22:59:47.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:47 vm11 bash[23232]: cluster 2026-03-08T22:59:44.575085+0000 osd.2 (osd.2) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-08T22:59:47.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:47 vm11 bash[23232]: cluster 2026-03-08T22:59:45.465712+0000 mgr.y (mgr.14150) 117 : cluster [DBG] pgmap v84: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-08T22:59:47.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:47 vm11 bash[23232]: cluster 2026-03-08T22:59:45.465712+0000 mgr.y (mgr.14150) 117 : cluster [DBG] pgmap v84: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-08T22:59:47.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:47 vm11 bash[23232]: audit 2026-03-08T22:59:46.516432+0000 mon.a (mon.0) 375 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:59:47.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:47 vm11 bash[23232]: audit 
2026-03-08T22:59:46.516432+0000 mon.a (mon.0) 375 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:59:47.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:47 vm11 bash[23232]: audit 2026-03-08T22:59:46.522078+0000 mon.a (mon.0) 376 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:59:47.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:47 vm11 bash[23232]: audit 2026-03-08T22:59:46.522078+0000 mon.a (mon.0) 376 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:59:47.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:47 vm11 bash[23232]: audit 2026-03-08T22:59:46.624116+0000 mon.a (mon.0) 377 : audit [INF] from='osd.2 ' entity='osd.2' 2026-03-08T22:59:47.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:47 vm11 bash[23232]: audit 2026-03-08T22:59:46.624116+0000 mon.a (mon.0) 377 : audit [INF] from='osd.2 ' entity='osd.2' 2026-03-08T22:59:47.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:47 vm11 bash[23232]: audit 2026-03-08T22:59:46.896473+0000 mon.a (mon.0) 378 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T22:59:47.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:47 vm11 bash[23232]: audit 2026-03-08T22:59:46.896473+0000 mon.a (mon.0) 378 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T22:59:47.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:47 vm11 bash[23232]: audit 2026-03-08T22:59:46.897179+0000 mon.a (mon.0) 379 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T22:59:47.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:47 vm11 bash[23232]: audit 2026-03-08T22:59:46.897179+0000 mon.a (mon.0) 379 : 
audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T22:59:47.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:47 vm11 bash[23232]: audit 2026-03-08T22:59:46.902312+0000 mon.a (mon.0) 380 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:59:47.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:47 vm11 bash[23232]: audit 2026-03-08T22:59:46.902312+0000 mon.a (mon.0) 380 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T22:59:47.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:47 vm11 bash[23232]: audit 2026-03-08T22:59:47.176444+0000 mon.a (mon.0) 381 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-08T22:59:47.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:47 vm11 bash[23232]: audit 2026-03-08T22:59:47.176444+0000 mon.a (mon.0) 381 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-08T22:59:47.748 INFO:teuthology.orchestra.run.vm06.stdout:Created osd(s) 2 on host 'vm06' 2026-03-08T22:59:47.834 DEBUG:teuthology.orchestra.run.vm06:osd.2> sudo journalctl -f -n 0 -u ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@osd.2.service 2026-03-08T22:59:47.835 INFO:tasks.cephadm:Deploying osd.3 on vm06 with /dev/vdb... 
2026-03-08T22:59:47.835 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e2eb96e6-1b41-11f1-83e5-75f1b5373d30 -- lvm zap /dev/vdb
2026-03-08T22:59:49.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:48 vm06 bash[20625]: cluster 2026-03-08T22:59:47.465937+0000 mgr.y (mgr.14150) 118 : cluster [DBG] pgmap v85: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-08T22:59:49.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:48 vm06 bash[20625]: cluster 2026-03-08T22:59:47.465937+0000 mgr.y (mgr.14150) 118 : cluster [DBG] pgmap v85: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-08T22:59:49.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:48 vm06 bash[20625]: cluster 2026-03-08T22:59:47.629630+0000 mon.a (mon.0) 382 : cluster [INF] osd.2 v2:192.168.123.106:6809/2508962009 boot
2026-03-08T22:59:49.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:48 vm06 bash[20625]: cluster 2026-03-08T22:59:47.629630+0000 mon.a (mon.0) 382 : cluster [INF] osd.2 v2:192.168.123.106:6809/2508962009 boot
2026-03-08T22:59:49.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:48 vm06 bash[20625]: cluster 2026-03-08T22:59:47.629657+0000 mon.a (mon.0) 383 : cluster [DBG] osdmap e18: 3 total, 3 up, 3 in
2026-03-08T22:59:49.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:48 vm06 bash[20625]: cluster 2026-03-08T22:59:47.629657+0000 mon.a (mon.0) 383 : cluster [DBG] osdmap e18: 3 total, 3 up, 3 in
2026-03-08T22:59:49.031 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:48 vm06 bash[20625]: audit 2026-03-08T22:59:47.629725+0000 mon.a (mon.0) 384 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-08T22:59:49.031 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:48 vm06 bash[20625]: audit 2026-03-08T22:59:47.629725+0000 mon.a (mon.0) 384 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-08T22:59:49.031 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:48 vm06 bash[20625]: audit 2026-03-08T22:59:47.728503+0000 mon.a (mon.0) 385 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T22:59:49.031 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:48 vm06 bash[20625]: audit 2026-03-08T22:59:47.728503+0000 mon.a (mon.0) 385 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T22:59:49.031 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:48 vm06 bash[20625]: audit 2026-03-08T22:59:47.735333+0000 mon.a (mon.0) 386 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:59:49.031 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:48 vm06 bash[20625]: audit 2026-03-08T22:59:47.735333+0000 mon.a (mon.0) 386 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:59:49.031 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:48 vm06 bash[20625]: audit 2026-03-08T22:59:47.742605+0000 mon.a (mon.0) 387 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:59:49.031 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:48 vm06 bash[20625]: audit 2026-03-08T22:59:47.742605+0000 mon.a (mon.0) 387 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:59:49.031 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:48 vm06 bash[27746]: cluster 2026-03-08T22:59:47.465937+0000 mgr.y (mgr.14150) 118 : cluster [DBG] pgmap v85: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-08T22:59:49.031 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:48 vm06 bash[27746]: cluster 2026-03-08T22:59:47.465937+0000 mgr.y (mgr.14150) 118 : cluster [DBG] pgmap v85: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-08T22:59:49.031 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:48 vm06 bash[27746]: cluster 2026-03-08T22:59:47.629630+0000 mon.a (mon.0) 382 : cluster [INF] osd.2 v2:192.168.123.106:6809/2508962009 boot
2026-03-08T22:59:49.031 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:48 vm06 bash[27746]: cluster 2026-03-08T22:59:47.629630+0000 mon.a (mon.0) 382 : cluster [INF] osd.2 v2:192.168.123.106:6809/2508962009 boot
2026-03-08T22:59:49.031 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:48 vm06 bash[27746]: cluster 2026-03-08T22:59:47.629657+0000 mon.a (mon.0) 383 : cluster [DBG] osdmap e18: 3 total, 3 up, 3 in
2026-03-08T22:59:49.031 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:48 vm06 bash[27746]: cluster 2026-03-08T22:59:47.629657+0000 mon.a (mon.0) 383 : cluster [DBG] osdmap e18: 3 total, 3 up, 3 in
2026-03-08T22:59:49.031 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:48 vm06 bash[27746]: audit 2026-03-08T22:59:47.629725+0000 mon.a (mon.0) 384 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-08T22:59:49.031 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:48 vm06 bash[27746]: audit 2026-03-08T22:59:47.629725+0000 mon.a (mon.0) 384 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-08T22:59:49.031 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:48 vm06 bash[27746]: audit 2026-03-08T22:59:47.728503+0000 mon.a (mon.0) 385 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T22:59:49.031 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:48 vm06 bash[27746]: audit 2026-03-08T22:59:47.728503+0000 mon.a (mon.0) 385 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T22:59:49.031 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:48 vm06 bash[27746]: audit 2026-03-08T22:59:47.735333+0000 mon.a (mon.0) 386 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:59:49.031 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:48 vm06 bash[27746]: audit 2026-03-08T22:59:47.735333+0000 mon.a (mon.0) 386 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:59:49.031 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:48 vm06 bash[27746]: audit 2026-03-08T22:59:47.742605+0000 mon.a (mon.0) 387 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:59:49.031 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:48 vm06 bash[27746]: audit 2026-03-08T22:59:47.742605+0000 mon.a (mon.0) 387 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:59:49.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:48 vm11 bash[23232]: cluster 2026-03-08T22:59:47.465937+0000 mgr.y (mgr.14150) 118 : cluster [DBG] pgmap v85: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-08T22:59:49.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:48 vm11 bash[23232]: cluster 2026-03-08T22:59:47.465937+0000 mgr.y (mgr.14150) 118 : cluster [DBG] pgmap v85: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
2026-03-08T22:59:49.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:48 vm11 bash[23232]: cluster 2026-03-08T22:59:47.629630+0000 mon.a (mon.0) 382 : cluster [INF] osd.2 v2:192.168.123.106:6809/2508962009 boot
2026-03-08T22:59:49.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:48 vm11 bash[23232]: cluster 2026-03-08T22:59:47.629630+0000 mon.a (mon.0) 382 : cluster [INF] osd.2 v2:192.168.123.106:6809/2508962009 boot
2026-03-08T22:59:49.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:48 vm11 bash[23232]: cluster 2026-03-08T22:59:47.629657+0000 mon.a (mon.0) 383 : cluster [DBG] osdmap e18: 3 total, 3 up, 3 in
2026-03-08T22:59:49.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:48 vm11 bash[23232]: cluster 2026-03-08T22:59:47.629657+0000 mon.a (mon.0) 383 : cluster [DBG] osdmap e18: 3 total, 3 up, 3 in
2026-03-08T22:59:49.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:48 vm11 bash[23232]: audit 2026-03-08T22:59:47.629725+0000 mon.a (mon.0) 384 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-08T22:59:49.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:48 vm11 bash[23232]: audit 2026-03-08T22:59:47.629725+0000 mon.a (mon.0) 384 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-08T22:59:49.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:48 vm11 bash[23232]: audit 2026-03-08T22:59:47.728503+0000 mon.a (mon.0) 385 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T22:59:49.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:48 vm11 bash[23232]: audit 2026-03-08T22:59:47.728503+0000 mon.a (mon.0) 385 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T22:59:49.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:48 vm11 bash[23232]: audit 2026-03-08T22:59:47.735333+0000 mon.a (mon.0) 386 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:59:49.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:48 vm11 bash[23232]: audit 2026-03-08T22:59:47.735333+0000 mon.a (mon.0) 386 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:59:49.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:48 vm11 bash[23232]: audit 2026-03-08T22:59:47.742605+0000 mon.a (mon.0) 387 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:59:49.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:48 vm11 bash[23232]: audit 2026-03-08T22:59:47.742605+0000 mon.a (mon.0) 387 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:59:50.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:49 vm06 bash[20625]: cluster 2026-03-08T22:59:48.653810+0000 mon.a (mon.0) 388 : cluster [DBG] osdmap e19: 3 total, 3 up, 3 in
2026-03-08T22:59:50.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:49 vm06 bash[20625]: cluster 2026-03-08T22:59:48.653810+0000 mon.a (mon.0) 388 : cluster [DBG] osdmap e19: 3 total, 3 up, 3 in
2026-03-08T22:59:50.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:49 vm06 bash[20625]: audit 2026-03-08T22:59:49.507585+0000 mon.a (mon.0) 389 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
2026-03-08T22:59:50.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:49 vm06 bash[20625]: audit 2026-03-08T22:59:49.507585+0000 mon.a (mon.0) 389 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
2026-03-08T22:59:50.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:49 vm06 bash[27746]: cluster 2026-03-08T22:59:48.653810+0000 mon.a (mon.0) 388 : cluster [DBG] osdmap e19: 3 total, 3 up, 3 in
2026-03-08T22:59:50.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:49 vm06 bash[27746]: cluster 2026-03-08T22:59:48.653810+0000 mon.a (mon.0) 388 : cluster [DBG] osdmap e19: 3 total, 3 up, 3 in
2026-03-08T22:59:50.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:49 vm06 bash[27746]: audit 2026-03-08T22:59:49.507585+0000 mon.a (mon.0) 389 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
2026-03-08T22:59:50.031 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:49 vm06 bash[27746]: audit 2026-03-08T22:59:49.507585+0000 mon.a (mon.0) 389 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
2026-03-08T22:59:50.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:49 vm11 bash[23232]: cluster 2026-03-08T22:59:48.653810+0000 mon.a (mon.0) 388 : cluster [DBG] osdmap e19: 3 total, 3 up, 3 in
2026-03-08T22:59:50.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:49 vm11 bash[23232]: cluster 2026-03-08T22:59:48.653810+0000 mon.a (mon.0) 388 : cluster [DBG] osdmap e19: 3 total, 3 up, 3 in
2026-03-08T22:59:50.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:49 vm11 bash[23232]: audit 2026-03-08T22:59:49.507585+0000 mon.a (mon.0) 389 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
2026-03-08T22:59:50.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:49 vm11 bash[23232]: audit 2026-03-08T22:59:49.507585+0000 mon.a (mon.0) 389 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
2026-03-08T22:59:51.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:50 vm06 bash[20625]: cluster 2026-03-08T22:59:49.466229+0000 mgr.y (mgr.14150) 119 : cluster [DBG] pgmap v88: 0 pgs: ; 0 B data, 479 MiB used, 60 GiB / 60 GiB avail
2026-03-08T22:59:51.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:50 vm06 bash[20625]: cluster 2026-03-08T22:59:49.466229+0000 mgr.y (mgr.14150) 119 : cluster [DBG] pgmap v88: 0 pgs: ; 0 B data, 479 MiB used, 60 GiB / 60 GiB avail
2026-03-08T22:59:51.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:50 vm06 bash[20625]: audit 2026-03-08T22:59:49.673457+0000 mon.a (mon.0) 390 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
2026-03-08T22:59:51.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:50 vm06 bash[20625]: audit 2026-03-08T22:59:49.673457+0000 mon.a (mon.0) 390 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
2026-03-08T22:59:51.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:50 vm06 bash[20625]: cluster 2026-03-08T22:59:49.678118+0000 mon.a (mon.0) 391 : cluster [DBG] osdmap e20: 3 total, 3 up, 3 in
2026-03-08T22:59:51.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:50 vm06 bash[20625]: cluster 2026-03-08T22:59:49.678118+0000 mon.a (mon.0) 391 : cluster [DBG] osdmap e20: 3 total, 3 up, 3 in
2026-03-08T22:59:51.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:50 vm06 bash[20625]: audit 2026-03-08T22:59:49.678995+0000 mon.a (mon.0) 392 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
2026-03-08T22:59:51.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:50 vm06 bash[20625]: audit 2026-03-08T22:59:49.678995+0000 mon.a (mon.0) 392 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
2026-03-08T22:59:51.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:50 vm06 bash[27746]: cluster 2026-03-08T22:59:49.466229+0000 mgr.y (mgr.14150) 119 : cluster [DBG] pgmap v88: 0 pgs: ; 0 B data, 479 MiB used, 60 GiB / 60 GiB avail
2026-03-08T22:59:51.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:50 vm06 bash[27746]: cluster 2026-03-08T22:59:49.466229+0000 mgr.y (mgr.14150) 119 : cluster [DBG] pgmap v88: 0 pgs: ; 0 B data, 479 MiB used, 60 GiB / 60 GiB avail
2026-03-08T22:59:51.031 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:50 vm06 bash[27746]: audit 2026-03-08T22:59:49.673457+0000 mon.a (mon.0) 390 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
2026-03-08T22:59:51.031 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:50 vm06 bash[27746]: audit 2026-03-08T22:59:49.673457+0000 mon.a (mon.0) 390 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
2026-03-08T22:59:51.031 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:50 vm06 bash[27746]: cluster 2026-03-08T22:59:49.678118+0000 mon.a (mon.0) 391 : cluster [DBG] osdmap e20: 3 total, 3 up, 3 in
2026-03-08T22:59:51.031 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:50 vm06 bash[27746]: cluster 2026-03-08T22:59:49.678118+0000 mon.a (mon.0) 391 : cluster [DBG] osdmap e20: 3 total, 3 up, 3 in
2026-03-08T22:59:51.031 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:50 vm06 bash[27746]: audit 2026-03-08T22:59:49.678995+0000 mon.a (mon.0) 392 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
2026-03-08T22:59:51.031 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:50 vm06 bash[27746]: audit 2026-03-08T22:59:49.678995+0000 mon.a (mon.0) 392 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
2026-03-08T22:59:51.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:50 vm11 bash[23232]: cluster 2026-03-08T22:59:49.466229+0000 mgr.y (mgr.14150) 119 : cluster [DBG] pgmap v88: 0 pgs: ; 0 B data, 479 MiB used, 60 GiB / 60 GiB avail
2026-03-08T22:59:51.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:50 vm11 bash[23232]: cluster 2026-03-08T22:59:49.466229+0000 mgr.y (mgr.14150) 119 : cluster [DBG] pgmap v88: 0 pgs: ; 0 B data, 479 MiB used, 60 GiB / 60 GiB avail
2026-03-08T22:59:51.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:50 vm11 bash[23232]: audit 2026-03-08T22:59:49.673457+0000 mon.a (mon.0) 390 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
2026-03-08T22:59:51.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:50 vm11 bash[23232]: audit 2026-03-08T22:59:49.673457+0000 mon.a (mon.0) 390 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
2026-03-08T22:59:51.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:50 vm11 bash[23232]: cluster 2026-03-08T22:59:49.678118+0000 mon.a (mon.0) 391 : cluster [DBG] osdmap e20: 3 total, 3 up, 3 in
2026-03-08T22:59:51.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:50 vm11 bash[23232]: cluster 2026-03-08T22:59:49.678118+0000 mon.a (mon.0) 391 : cluster [DBG] osdmap e20: 3 total, 3 up, 3 in
2026-03-08T22:59:51.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:50 vm11 bash[23232]: audit 2026-03-08T22:59:49.678995+0000 mon.a (mon.0) 392 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
2026-03-08T22:59:51.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:50 vm11 bash[23232]: audit 2026-03-08T22:59:49.678995+0000 mon.a (mon.0) 392 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
2026-03-08T22:59:52.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:51 vm06 bash[20625]: audit 2026-03-08T22:59:50.676968+0000 mon.a (mon.0) 393 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
2026-03-08T22:59:52.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:51 vm06 bash[20625]: audit 2026-03-08T22:59:50.676968+0000 mon.a (mon.0) 393 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
2026-03-08T22:59:52.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:51 vm06 bash[20625]: cluster 2026-03-08T22:59:50.682936+0000 mon.a (mon.0) 394 : cluster [DBG] osdmap e21: 3 total, 3 up, 3 in
2026-03-08T22:59:52.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:51 vm06 bash[20625]: cluster 2026-03-08T22:59:50.682936+0000 mon.a (mon.0) 394 : cluster [DBG] osdmap e21: 3 total, 3 up, 3 in
2026-03-08T22:59:52.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:51 vm06 bash[20625]: audit 2026-03-08T22:59:51.349040+0000 mon.a (mon.0) 395 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
2026-03-08T22:59:52.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:51 vm06 bash[20625]: audit 2026-03-08T22:59:51.349040+0000 mon.a (mon.0) 395 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
2026-03-08T22:59:52.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:51 vm06 bash[20625]: audit 2026-03-08T22:59:51.369001+0000 mon.a (mon.0) 396 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
2026-03-08T22:59:52.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:51 vm06 bash[20625]: audit 2026-03-08T22:59:51.369001+0000 mon.a (mon.0) 396 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
2026-03-08T22:59:52.031 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:51 vm06 bash[20625]: audit 2026-03-08T22:59:51.369467+0000 mon.a (mon.0) 397 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-08T22:59:52.031 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:51 vm06 bash[20625]: audit 2026-03-08T22:59:51.369467+0000 mon.a (mon.0) 397 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-08T22:59:52.031 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:51 vm06 bash[20625]: audit 2026-03-08T22:59:51.369530+0000 mon.a (mon.0) 398 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-08T22:59:52.031 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:51 vm06 bash[20625]: audit 2026-03-08T22:59:51.369530+0000 mon.a (mon.0) 398 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-08T22:59:52.031 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:51 vm06 bash[20625]: audit 2026-03-08T22:59:51.369573+0000 mon.a (mon.0) 399 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-08T22:59:52.031 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:51 vm06 bash[20625]: audit 2026-03-08T22:59:51.369573+0000 mon.a (mon.0) 399 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-08T22:59:52.031 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:51 vm06 bash[20625]: audit 2026-03-08T22:59:51.371362+0000 mon.a (mon.0) 400 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-08T22:59:52.031 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:51 vm06 bash[20625]: audit 2026-03-08T22:59:51.371362+0000 mon.a (mon.0) 400 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-08T22:59:52.031 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:51 vm06 bash[20625]: audit 2026-03-08T22:59:51.371408+0000 mon.a (mon.0) 401 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-08T22:59:52.031 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:51 vm06 bash[20625]: audit 2026-03-08T22:59:51.371408+0000 mon.a (mon.0) 401 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-08T22:59:52.031 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:51 vm06 bash[20625]: audit 2026-03-08T22:59:51.371416+0000 mon.b (mon.1) 8 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
2026-03-08T22:59:52.031 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:51 vm06 bash[20625]: audit 2026-03-08T22:59:51.371416+0000 mon.b (mon.1) 8 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
2026-03-08T22:59:52.031 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:51 vm06 bash[20625]: audit 2026-03-08T22:59:51.371443+0000 mon.a (mon.0) 402 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-08T22:59:52.031 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:51 vm06 bash[20625]: audit 2026-03-08T22:59:51.371443+0000 mon.a (mon.0) 402 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-08T22:59:52.031 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:51 vm06 bash[20625]: audit 2026-03-08T22:59:51.388657+0000 mon.b (mon.1) 9 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
2026-03-08T22:59:52.031 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:51 vm06 bash[20625]: audit 2026-03-08T22:59:51.388657+0000 mon.b (mon.1) 9 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
2026-03-08T22:59:52.031 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:51 vm06 bash[20625]: audit 2026-03-08T22:59:51.390154+0000 mon.c (mon.2) 6 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
2026-03-08T22:59:52.031 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:51 vm06 bash[20625]: audit 2026-03-08T22:59:51.390154+0000 mon.c (mon.2) 6 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
2026-03-08T22:59:52.031 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:51 vm06 bash[20625]: audit 2026-03-08T22:59:51.397253+0000 mon.a (mon.0) 403 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-08T22:59:52.031 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:51 vm06 bash[20625]: audit 2026-03-08T22:59:51.397253+0000 mon.a (mon.0) 403 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-08T22:59:52.031 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:51 vm06 bash[20625]: audit 2026-03-08T22:59:51.397319+0000 mon.a (mon.0) 404 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-08T22:59:52.031 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:51 vm06 bash[20625]: audit 2026-03-08T22:59:51.397319+0000 mon.a (mon.0) 404 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-08T22:59:52.031 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:51 vm06 bash[20625]: audit 2026-03-08T22:59:51.397596+0000 mon.a (mon.0) 405 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-08T22:59:52.031 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:51 vm06 bash[20625]: audit 2026-03-08T22:59:51.397596+0000 mon.a (mon.0) 405 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-08T22:59:52.031 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:51 vm06 bash[20625]: audit 2026-03-08T22:59:51.409479+0000 mon.c (mon.2) 7 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
2026-03-08T22:59:52.031 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:51 vm06 bash[20625]: audit 2026-03-08T22:59:51.409479+0000 mon.c (mon.2) 7 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
2026-03-08T22:59:52.031 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:51 vm06 bash[27746]: audit 2026-03-08T22:59:50.676968+0000 mon.a (mon.0) 393 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
2026-03-08T22:59:52.031 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:51 vm06 bash[27746]: audit 2026-03-08T22:59:50.676968+0000 mon.a (mon.0) 393 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
2026-03-08T22:59:52.031 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:51 vm06 bash[27746]: cluster 2026-03-08T22:59:50.682936+0000 mon.a (mon.0) 394 : cluster [DBG] osdmap e21: 3 total, 3 up, 3 in
2026-03-08T22:59:52.031 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:51 vm06 bash[27746]: cluster 2026-03-08T22:59:50.682936+0000 mon.a (mon.0) 394 : cluster [DBG] osdmap e21: 3 total, 3 up, 3 in
2026-03-08T22:59:52.032 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:51 vm06 bash[27746]: audit 2026-03-08T22:59:51.349040+0000 mon.a (mon.0) 395 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
2026-03-08T22:59:52.032 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:51 vm06 bash[27746]: audit 2026-03-08T22:59:51.349040+0000 mon.a (mon.0) 395 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
2026-03-08T22:59:52.032 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:51 vm06 bash[27746]: audit 2026-03-08T22:59:51.369001+0000 mon.a (mon.0) 396 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
2026-03-08T22:59:52.032 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:51 vm06 bash[27746]: audit 2026-03-08T22:59:51.369001+0000 mon.a (mon.0) 396 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
2026-03-08T22:59:52.032 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:51 vm06 bash[27746]: audit 2026-03-08T22:59:51.369467+0000 mon.a (mon.0) 397 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-08T22:59:52.032 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:51 vm06 bash[27746]: audit 2026-03-08T22:59:51.369467+0000 mon.a (mon.0) 397 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-08T22:59:52.032 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:51 vm06 bash[27746]: audit 2026-03-08T22:59:51.369530+0000 mon.a (mon.0) 398 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-08T22:59:52.032 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:51 vm06 bash[27746]: audit 2026-03-08T22:59:51.369530+0000 mon.a (mon.0) 398 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-08T22:59:52.032 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:51 vm06 bash[27746]: audit 2026-03-08T22:59:51.369573+0000 mon.a (mon.0) 399 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-08T22:59:52.032 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:51 vm06 bash[27746]: audit 2026-03-08T22:59:51.369573+0000 mon.a (mon.0) 399 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-08T22:59:52.032 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:51 vm06 bash[27746]: audit 2026-03-08T22:59:51.371362+0000 mon.a (mon.0) 400 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-08T22:59:52.032 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:51 vm06 bash[27746]: audit 2026-03-08T22:59:51.371362+0000 mon.a (mon.0) 400 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-08T22:59:52.032 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:51 vm06 bash[27746]: audit 2026-03-08T22:59:51.371408+0000 mon.a (mon.0) 401 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-08T22:59:52.032 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:51 vm06 bash[27746]: audit 2026-03-08T22:59:51.371408+0000 mon.a (mon.0) 401 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-08T22:59:52.032 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:51 vm06 bash[27746]: audit 2026-03-08T22:59:51.371416+0000 mon.b (mon.1) 8 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
2026-03-08T22:59:52.032 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:51 vm06 bash[27746]: audit 2026-03-08T22:59:51.371416+0000 mon.b (mon.1) 8 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
2026-03-08T22:59:52.032 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:51 vm06 bash[27746]: audit 2026-03-08T22:59:51.371443+0000 mon.a (mon.0) 402 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-08T22:59:52.032 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:51 vm06 bash[27746]: audit 2026-03-08T22:59:51.371443+0000 mon.a (mon.0) 402 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-08T22:59:52.032 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:51 vm06 bash[27746]: audit 2026-03-08T22:59:51.388657+0000 mon.b (mon.1) 9 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
2026-03-08T22:59:52.032 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:51 vm06 bash[27746]: audit 2026-03-08T22:59:51.388657+0000 mon.b (mon.1) 9 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
2026-03-08T22:59:52.032 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:51 vm06 bash[27746]: audit 2026-03-08T22:59:51.390154+0000 mon.c (mon.2) 6 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
2026-03-08T22:59:52.032 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:51 vm06 bash[27746]: audit 2026-03-08T22:59:51.390154+0000 mon.c (mon.2) 6 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
2026-03-08T22:59:52.032 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:51 vm06 bash[27746]: audit 2026-03-08T22:59:51.397253+0000 mon.a (mon.0) 403 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-08T22:59:52.032 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:51 vm06 bash[27746]: audit 2026-03-08T22:59:51.397253+0000 mon.a (mon.0) 403 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-08T22:59:52.032 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:51 vm06 bash[27746]: audit 2026-03-08T22:59:51.397319+0000 mon.a (mon.0) 404 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-08T22:59:52.032 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:51 vm06 bash[27746]: audit 2026-03-08T22:59:51.397319+0000 mon.a (mon.0) 404 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-08T22:59:52.032 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:51 vm06 bash[27746]: audit 2026-03-08T22:59:51.397596+0000 mon.a (mon.0) 405 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-08T22:59:52.032 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:51 vm06 bash[27746]: audit 2026-03-08T22:59:51.397596+0000 mon.a (mon.0) 405 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-08T22:59:52.032 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:51 vm06 bash[27746]: audit 2026-03-08T22:59:51.409479+0000 mon.c (mon.2) 7 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
2026-03-08T22:59:52.032 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:51 vm06 bash[27746]: audit 2026-03-08T22:59:51.409479+0000 mon.c (mon.2) 7 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
2026-03-08T22:59:52.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:51 vm11 bash[23232]: audit 2026-03-08T22:59:50.676968+0000 mon.a (mon.0) 393 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
2026-03-08T22:59:52.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:51 vm11 bash[23232]: audit 2026-03-08T22:59:50.676968+0000 mon.a (mon.0) 393 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
2026-03-08T22:59:52.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:51 vm11 bash[23232]: cluster 2026-03-08T22:59:50.682936+0000 mon.a (mon.0) 394 : cluster [DBG] osdmap e21: 3 total, 3 up, 3 in
2026-03-08T22:59:52.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:51 vm11 bash[23232]: cluster 2026-03-08T22:59:50.682936+0000 mon.a (mon.0) 394 : cluster [DBG] osdmap e21: 3 total, 3 up, 3 in
2026-03-08T22:59:52.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:51 vm11 bash[23232]: audit 2026-03-08T22:59:51.349040+0000 mon.a (mon.0) 395 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
2026-03-08T22:59:52.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:51 vm11 bash[23232]: audit 2026-03-08T22:59:51.349040+0000 mon.a (mon.0) 395 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
2026-03-08T22:59:52.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:51 vm11 bash[23232]: audit 2026-03-08T22:59:51.369001+0000 mon.a (mon.0) 396 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
2026-03-08T22:59:52.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:51 vm11 bash[23232]: audit 2026-03-08T22:59:51.369001+0000 mon.a (mon.0) 396 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
2026-03-08T22:59:52.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:51 vm11 bash[23232]: audit 2026-03-08T22:59:51.369467+0000 mon.a (mon.0) 397 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-08T22:59:52.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:51
vm11 bash[23232]: audit 2026-03-08T22:59:51.369467+0000 mon.a (mon.0) 397 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-08T22:59:52.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:51 vm11 bash[23232]: audit 2026-03-08T22:59:51.369530+0000 mon.a (mon.0) 398 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-08T22:59:52.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:51 vm11 bash[23232]: audit 2026-03-08T22:59:51.369530+0000 mon.a (mon.0) 398 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-08T22:59:52.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:51 vm11 bash[23232]: audit 2026-03-08T22:59:51.369573+0000 mon.a (mon.0) 399 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-08T22:59:52.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:51 vm11 bash[23232]: audit 2026-03-08T22:59:51.369573+0000 mon.a (mon.0) 399 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-08T22:59:52.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:51 vm11 bash[23232]: audit 2026-03-08T22:59:51.371362+0000 mon.a (mon.0) 400 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-08T22:59:52.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:51 vm11 bash[23232]: audit 2026-03-08T22:59:51.371362+0000 mon.a (mon.0) 400 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-08T22:59:52.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:51 vm11 bash[23232]: audit 
2026-03-08T22:59:51.371408+0000 mon.a (mon.0) 401 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-08T22:59:52.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:51 vm11 bash[23232]: audit 2026-03-08T22:59:51.371408+0000 mon.a (mon.0) 401 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-08T22:59:52.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:51 vm11 bash[23232]: audit 2026-03-08T22:59:51.371416+0000 mon.b (mon.1) 8 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-08T22:59:52.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:51 vm11 bash[23232]: audit 2026-03-08T22:59:51.371416+0000 mon.b (mon.1) 8 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-08T22:59:52.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:51 vm11 bash[23232]: audit 2026-03-08T22:59:51.371443+0000 mon.a (mon.0) 402 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-08T22:59:52.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:51 vm11 bash[23232]: audit 2026-03-08T22:59:51.371443+0000 mon.a (mon.0) 402 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-08T22:59:52.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:51 vm11 bash[23232]: audit 2026-03-08T22:59:51.388657+0000 mon.b (mon.1) 9 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-08T22:59:52.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:51 vm11 bash[23232]: audit 2026-03-08T22:59:51.388657+0000 mon.b (mon.1) 9 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 
2026-03-08T22:59:52.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:51 vm11 bash[23232]: audit 2026-03-08T22:59:51.390154+0000 mon.c (mon.2) 6 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
2026-03-08T22:59:52.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:51 vm11 bash[23232]: audit 2026-03-08T22:59:51.397253+0000 mon.a (mon.0) 403 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-08T22:59:52.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:51 vm11 bash[23232]: audit 2026-03-08T22:59:51.397319+0000 mon.a (mon.0) 404 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-08T22:59:52.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:51 vm11 bash[23232]: audit 2026-03-08T22:59:51.397596+0000 mon.a (mon.0) 405 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-08T22:59:52.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:51 vm11 bash[23232]: audit 2026-03-08T22:59:51.409479+0000 mon.c (mon.2) 7 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
2026-03-08T22:59:52.507 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/mon.c/config
2026-03-08T22:59:52.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:52 vm06 bash[20625]: cluster 2026-03-08T22:59:51.466576+0000 mgr.y (mgr.14150) 120 : cluster [DBG] pgmap v91: 1 pgs: 1 unknown; 0 B data, 479 MiB used, 60 GiB / 60 GiB avail
2026-03-08T22:59:52.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:52 vm06 bash[20625]: cluster 2026-03-08T22:59:51.741203+0000 mon.a (mon.0) 406 : cluster [DBG] mgrmap e15: y(active, since 2m), standbys: x
2026-03-08T22:59:52.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:52 vm06 bash[20625]: cluster 2026-03-08T22:59:51.741386+0000 mon.a (mon.0) 407 : cluster [DBG] osdmap e22: 3 total, 3 up, 3 in
2026-03-08T22:59:52.781 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:52 vm06 bash[27746]: cluster 2026-03-08T22:59:51.466576+0000 mgr.y (mgr.14150) 120 : cluster [DBG] pgmap v91: 1 pgs: 1 unknown; 0 B data, 479 MiB used, 60 GiB / 60 GiB avail
2026-03-08T22:59:52.781 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:52 vm06 bash[27746]: cluster 2026-03-08T22:59:51.741203+0000 mon.a (mon.0) 406 : cluster [DBG] mgrmap e15: y(active, since 2m), standbys: x
2026-03-08T22:59:52.781 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:52 vm06 bash[27746]: cluster 2026-03-08T22:59:51.741386+0000 mon.a (mon.0) 407 : cluster [DBG] osdmap e22: 3 total, 3 up, 3 in
2026-03-08T22:59:53.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:52 vm11 bash[23232]: cluster 2026-03-08T22:59:51.466576+0000 mgr.y (mgr.14150) 120 : cluster [DBG] pgmap v91: 1 pgs: 1 unknown; 0 B data, 479 MiB used, 60 GiB / 60 GiB avail
2026-03-08T22:59:53.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:52 vm11 bash[23232]: cluster 2026-03-08T22:59:51.741203+0000 mon.a (mon.0) 406 : cluster [DBG] mgrmap e15: y(active, since 2m), standbys: x
2026-03-08T22:59:53.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:52 vm11 bash[23232]: cluster 2026-03-08T22:59:51.741386+0000 mon.a (mon.0) 407 : cluster [DBG] osdmap e22: 3 total, 3 up, 3 in
2026-03-08T22:59:53.509 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-08T22:59:53.530 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e2eb96e6-1b41-11f1-83e5-75f1b5373d30 -- ceph orch daemon add osd vm06:/dev/vdb
2026-03-08T22:59:55.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:55 vm11 bash[23232]: cluster 2026-03-08T22:59:53.466917+0000 mgr.y (mgr.14150) 121 : cluster [DBG] pgmap v93: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
2026-03-08T22:59:55.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:55 vm11 bash[23232]: cephadm 2026-03-08T22:59:54.278331+0000 mgr.y (mgr.14150) 122 : cephadm [INF] Detected new or changed devices on vm06
2026-03-08T22:59:55.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:55 vm11 bash[23232]: audit 2026-03-08T22:59:54.284339+0000 mon.a (mon.0) 408 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:59:55.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:55 vm11 bash[23232]: audit 2026-03-08T22:59:54.289418+0000 mon.a (mon.0) 409 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:59:55.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:55 vm11 bash[23232]: audit 2026-03-08T22:59:54.290374+0000 mon.a (mon.0) 410 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm06", "name": "osd_memory_target"}]: dispatch
2026-03-08T22:59:55.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:55 vm11 bash[23232]: audit 2026-03-08T22:59:54.290942+0000 mon.a (mon.0) 411 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T22:59:55.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:55 vm11 bash[23232]: audit 2026-03-08T22:59:54.291424+0000 mon.a (mon.0) 412 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T22:59:55.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:55 vm11 bash[23232]: audit 2026-03-08T22:59:54.295363+0000 mon.a (mon.0) 413 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:59:55.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:55 vm06 bash[20625]: cluster 2026-03-08T22:59:53.466917+0000 mgr.y (mgr.14150) 121 : cluster [DBG] pgmap v93: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
2026-03-08T22:59:55.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:55 vm06 bash[20625]: cephadm 2026-03-08T22:59:54.278331+0000 mgr.y (mgr.14150) 122 : cephadm [INF] Detected new or changed devices on vm06
2026-03-08T22:59:55.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:55 vm06 bash[20625]: audit 2026-03-08T22:59:54.284339+0000 mon.a (mon.0) 408 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:59:55.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:55 vm06 bash[20625]: audit 2026-03-08T22:59:54.289418+0000 mon.a (mon.0) 409 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:59:55.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:55 vm06 bash[20625]: audit 2026-03-08T22:59:54.290374+0000 mon.a (mon.0) 410 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm06", "name": "osd_memory_target"}]: dispatch
2026-03-08T22:59:55.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:55 vm06 bash[20625]: audit 2026-03-08T22:59:54.290942+0000 mon.a (mon.0) 411 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T22:59:55.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:55 vm06 bash[20625]: audit 2026-03-08T22:59:54.291424+0000 mon.a (mon.0) 412 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T22:59:55.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:55 vm06 bash[20625]: audit 2026-03-08T22:59:54.295363+0000 mon.a (mon.0) 413 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:59:55.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:55 vm06 bash[27746]: cluster 2026-03-08T22:59:53.466917+0000 mgr.y (mgr.14150) 121 : cluster [DBG] pgmap v93: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
2026-03-08T22:59:55.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:55 vm06 bash[27746]: cephadm 2026-03-08T22:59:54.278331+0000 mgr.y (mgr.14150) 122 : cephadm [INF] Detected new or changed devices on vm06
2026-03-08T22:59:55.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:55 vm06 bash[27746]: audit 2026-03-08T22:59:54.284339+0000 mon.a (mon.0) 408 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:59:55.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:55 vm06 bash[27746]: audit 2026-03-08T22:59:54.289418+0000 mon.a (mon.0) 409 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:59:55.781 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:55 vm06 bash[27746]: audit 2026-03-08T22:59:54.290374+0000 mon.a (mon.0) 410 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm06", "name": "osd_memory_target"}]: dispatch
2026-03-08T22:59:55.781 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:55 vm06 bash[27746]: audit 2026-03-08T22:59:54.290942+0000 mon.a (mon.0) 411 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T22:59:55.781 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:55 vm06 bash[27746]: audit 2026-03-08T22:59:54.291424+0000 mon.a (mon.0) 412 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T22:59:55.781 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:55 vm06 bash[27746]: audit 2026-03-08T22:59:54.295363+0000 mon.a (mon.0) 413 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T22:59:57.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:57 vm11 bash[23232]: cluster 2026-03-08T22:59:55.467209+0000 mgr.y (mgr.14150) 123 : cluster [DBG] pgmap v94: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
2026-03-08T22:59:57.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:57 vm06 bash[20625]: cluster 2026-03-08T22:59:55.467209+0000 mgr.y (mgr.14150) 123 : cluster [DBG] pgmap v94: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
2026-03-08T22:59:57.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:57 vm06 bash[27746]: cluster 2026-03-08T22:59:55.467209+0000 mgr.y (mgr.14150) 123 : cluster [DBG] pgmap v94: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
2026-03-08T22:59:58.188 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/mon.c/config
2026-03-08T22:59:58.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:58 vm06 bash[20625]: cluster 2026-03-08T22:59:57.467445+0000 mgr.y (mgr.14150) 124 : cluster [DBG] pgmap v95: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-08T22:59:58.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:58 vm06 bash[27746]: cluster 2026-03-08T22:59:57.467445+0000 mgr.y (mgr.14150) 124 : cluster [DBG] pgmap v95: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-08T22:59:58.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:58 vm11 bash[23232]: cluster 2026-03-08T22:59:57.467445+0000 mgr.y (mgr.14150) 124 : cluster [DBG] pgmap v95: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-08T22:59:59.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:59 vm06 bash[20625]: audit 2026-03-08T22:59:58.629766+0000 mgr.y (mgr.14150) 125 : audit [DBG] from='client.24181 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm06:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch
2026-03-08T22:59:59.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:59 vm06 bash[20625]: audit 2026-03-08T22:59:58.631113+0000 mon.a (mon.0) 414 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-08T22:59:59.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:59 vm06 bash[20625]: audit 2026-03-08T22:59:58.632368+0000 mon.a (mon.0) 415 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-08T22:59:59.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 22:59:59 vm06 bash[20625]: audit 2026-03-08T22:59:58.632759+0000 mon.a (mon.0) 416 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T22:59:59.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:59 vm06 bash[27746]: audit 2026-03-08T22:59:58.629766+0000 mgr.y (mgr.14150) 125 : audit [DBG] from='client.24181 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm06:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch
2026-03-08T22:59:59.781 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:59 vm06 bash[27746]: audit 2026-03-08T22:59:58.631113+0000 mon.a (mon.0) 414 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-08T22:59:59.781 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:59 vm06 bash[27746]: audit 2026-03-08T22:59:58.632368+0000 mon.a (mon.0) 415 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-08T22:59:59.781 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 22:59:59 vm06 bash[27746]: audit 2026-03-08T22:59:58.632759+0000 mon.a (mon.0) 416 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T22:59:59.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:59 vm11 bash[23232]: audit 2026-03-08T22:59:58.629766+0000 mgr.y (mgr.14150) 125 : audit [DBG] from='client.24181 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm06:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch
2026-03-08T22:59:59.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:59 vm11 bash[23232]: audit 2026-03-08T22:59:58.631113+0000 mon.a (mon.0) 414 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-08T22:59:59.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:59 vm11 bash[23232]: audit 2026-03-08T22:59:58.632368+0000 mon.a (mon.0) 415 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-08T22:59:59.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:59 vm11 bash[23232]: audit 2026-03-08T22:59:58.632759+0000 mon.a (mon.0) 416 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T22:59:59.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 22:59:59 vm11 bash[23232]: audit 2026-03-08T22:59:58.632759+0000 mon.a (mon.0) 416 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:00:00.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:00 vm06 bash[20625]: cluster 2026-03-08T22:59:59.467735+0000 mgr.y (mgr.14150) 126 : cluster [DBG] pgmap v96: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:00:00.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:00 vm06 bash[20625]: cluster 2026-03-08T22:59:59.467735+0000 mgr.y (mgr.14150) 126 : cluster [DBG] pgmap v96: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:00:00.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:00 vm06 bash[20625]: cluster 2026-03-08T23:00:00.000180+0000 mon.a (mon.0) 417 : cluster [INF] overall HEALTH_OK 2026-03-08T23:00:00.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:00 vm06 bash[20625]: cluster 2026-03-08T23:00:00.000180+0000 mon.a (mon.0) 417 : cluster [INF] overall HEALTH_OK 2026-03-08T23:00:00.781 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:00 vm06 bash[27746]: cluster 2026-03-08T22:59:59.467735+0000 mgr.y (mgr.14150) 126 : cluster [DBG] pgmap v96: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:00:00.781 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:00 vm06 bash[27746]: cluster 2026-03-08T22:59:59.467735+0000 mgr.y (mgr.14150) 126 : cluster [DBG] pgmap v96: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:00:00.781 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:00 vm06 bash[27746]: cluster 2026-03-08T23:00:00.000180+0000 mon.a (mon.0) 417 : cluster [INF] overall HEALTH_OK 2026-03-08T23:00:00.781 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:00 vm06 bash[27746]: cluster 
2026-03-08T23:00:00.000180+0000 mon.a (mon.0) 417 : cluster [INF] overall HEALTH_OK 2026-03-08T23:00:00.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:00 vm11 bash[23232]: cluster 2026-03-08T22:59:59.467735+0000 mgr.y (mgr.14150) 126 : cluster [DBG] pgmap v96: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:00:00.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:00 vm11 bash[23232]: cluster 2026-03-08T22:59:59.467735+0000 mgr.y (mgr.14150) 126 : cluster [DBG] pgmap v96: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:00:00.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:00 vm11 bash[23232]: cluster 2026-03-08T23:00:00.000180+0000 mon.a (mon.0) 417 : cluster [INF] overall HEALTH_OK 2026-03-08T23:00:00.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:00 vm11 bash[23232]: cluster 2026-03-08T23:00:00.000180+0000 mon.a (mon.0) 417 : cluster [INF] overall HEALTH_OK 2026-03-08T23:00:02.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:02 vm11 bash[23232]: cluster 2026-03-08T23:00:01.467965+0000 mgr.y (mgr.14150) 127 : cluster [DBG] pgmap v97: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:00:02.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:02 vm11 bash[23232]: cluster 2026-03-08T23:00:01.467965+0000 mgr.y (mgr.14150) 127 : cluster [DBG] pgmap v97: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:00:03.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:02 vm06 bash[20625]: cluster 2026-03-08T23:00:01.467965+0000 mgr.y (mgr.14150) 127 : cluster [DBG] pgmap v97: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:00:03.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:02 vm06 bash[20625]: cluster 2026-03-08T23:00:01.467965+0000 mgr.y (mgr.14150) 127 : cluster [DBG] pgmap v97: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 
GiB avail 2026-03-08T23:00:03.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:02 vm06 bash[27746]: cluster 2026-03-08T23:00:01.467965+0000 mgr.y (mgr.14150) 127 : cluster [DBG] pgmap v97: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:00:03.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:02 vm06 bash[27746]: cluster 2026-03-08T23:00:01.467965+0000 mgr.y (mgr.14150) 127 : cluster [DBG] pgmap v97: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:00:04.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:04 vm06 bash[20625]: cluster 2026-03-08T23:00:03.468209+0000 mgr.y (mgr.14150) 128 : cluster [DBG] pgmap v98: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:00:04.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:04 vm06 bash[20625]: cluster 2026-03-08T23:00:03.468209+0000 mgr.y (mgr.14150) 128 : cluster [DBG] pgmap v98: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:00:04.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:04 vm06 bash[20625]: audit 2026-03-08T23:00:04.026970+0000 mon.a (mon.0) 418 : audit [INF] from='client.? 192.168.123.106:0/156983036' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "19da1389-a7b0-483c-b2d4-8be50f26c1c4"}]: dispatch 2026-03-08T23:00:04.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:04 vm06 bash[20625]: audit 2026-03-08T23:00:04.026970+0000 mon.a (mon.0) 418 : audit [INF] from='client.? 192.168.123.106:0/156983036' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "19da1389-a7b0-483c-b2d4-8be50f26c1c4"}]: dispatch 2026-03-08T23:00:04.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:04 vm06 bash[20625]: audit 2026-03-08T23:00:04.034528+0000 mon.a (mon.0) 419 : audit [INF] from='client.? 
192.168.123.106:0/156983036' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "19da1389-a7b0-483c-b2d4-8be50f26c1c4"}]': finished 2026-03-08T23:00:04.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:04 vm06 bash[20625]: audit 2026-03-08T23:00:04.034528+0000 mon.a (mon.0) 419 : audit [INF] from='client.? 192.168.123.106:0/156983036' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "19da1389-a7b0-483c-b2d4-8be50f26c1c4"}]': finished 2026-03-08T23:00:04.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:04 vm06 bash[20625]: cluster 2026-03-08T23:00:04.037419+0000 mon.a (mon.0) 420 : cluster [DBG] osdmap e23: 4 total, 3 up, 4 in 2026-03-08T23:00:04.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:04 vm06 bash[20625]: cluster 2026-03-08T23:00:04.037419+0000 mon.a (mon.0) 420 : cluster [DBG] osdmap e23: 4 total, 3 up, 4 in 2026-03-08T23:00:04.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:04 vm06 bash[20625]: audit 2026-03-08T23:00:04.037587+0000 mon.a (mon.0) 421 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-08T23:00:04.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:04 vm06 bash[20625]: audit 2026-03-08T23:00:04.037587+0000 mon.a (mon.0) 421 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-08T23:00:04.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:04 vm06 bash[27746]: cluster 2026-03-08T23:00:03.468209+0000 mgr.y (mgr.14150) 128 : cluster [DBG] pgmap v98: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:00:04.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:04 vm06 bash[27746]: cluster 2026-03-08T23:00:03.468209+0000 mgr.y (mgr.14150) 128 : cluster [DBG] pgmap v98: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:00:04.780 
INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:04 vm06 bash[27746]: audit 2026-03-08T23:00:04.026970+0000 mon.a (mon.0) 418 : audit [INF] from='client.? 192.168.123.106:0/156983036' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "19da1389-a7b0-483c-b2d4-8be50f26c1c4"}]: dispatch 2026-03-08T23:00:04.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:04 vm06 bash[27746]: audit 2026-03-08T23:00:04.026970+0000 mon.a (mon.0) 418 : audit [INF] from='client.? 192.168.123.106:0/156983036' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "19da1389-a7b0-483c-b2d4-8be50f26c1c4"}]: dispatch 2026-03-08T23:00:04.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:04 vm06 bash[27746]: audit 2026-03-08T23:00:04.034528+0000 mon.a (mon.0) 419 : audit [INF] from='client.? 192.168.123.106:0/156983036' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "19da1389-a7b0-483c-b2d4-8be50f26c1c4"}]': finished 2026-03-08T23:00:04.781 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:04 vm06 bash[27746]: audit 2026-03-08T23:00:04.034528+0000 mon.a (mon.0) 419 : audit [INF] from='client.? 
192.168.123.106:0/156983036' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "19da1389-a7b0-483c-b2d4-8be50f26c1c4"}]': finished 2026-03-08T23:00:04.781 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:04 vm06 bash[27746]: cluster 2026-03-08T23:00:04.037419+0000 mon.a (mon.0) 420 : cluster [DBG] osdmap e23: 4 total, 3 up, 4 in 2026-03-08T23:00:04.781 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:04 vm06 bash[27746]: cluster 2026-03-08T23:00:04.037419+0000 mon.a (mon.0) 420 : cluster [DBG] osdmap e23: 4 total, 3 up, 4 in 2026-03-08T23:00:04.781 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:04 vm06 bash[27746]: audit 2026-03-08T23:00:04.037587+0000 mon.a (mon.0) 421 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-08T23:00:04.781 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:04 vm06 bash[27746]: audit 2026-03-08T23:00:04.037587+0000 mon.a (mon.0) 421 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-08T23:00:05.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:04 vm11 bash[23232]: cluster 2026-03-08T23:00:03.468209+0000 mgr.y (mgr.14150) 128 : cluster [DBG] pgmap v98: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:00:05.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:04 vm11 bash[23232]: cluster 2026-03-08T23:00:03.468209+0000 mgr.y (mgr.14150) 128 : cluster [DBG] pgmap v98: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:00:05.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:04 vm11 bash[23232]: audit 2026-03-08T23:00:04.026970+0000 mon.a (mon.0) 418 : audit [INF] from='client.? 
192.168.123.106:0/156983036' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "19da1389-a7b0-483c-b2d4-8be50f26c1c4"}]: dispatch 2026-03-08T23:00:05.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:04 vm11 bash[23232]: audit 2026-03-08T23:00:04.026970+0000 mon.a (mon.0) 418 : audit [INF] from='client.? 192.168.123.106:0/156983036' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "19da1389-a7b0-483c-b2d4-8be50f26c1c4"}]: dispatch 2026-03-08T23:00:05.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:04 vm11 bash[23232]: audit 2026-03-08T23:00:04.034528+0000 mon.a (mon.0) 419 : audit [INF] from='client.? 192.168.123.106:0/156983036' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "19da1389-a7b0-483c-b2d4-8be50f26c1c4"}]': finished 2026-03-08T23:00:05.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:04 vm11 bash[23232]: audit 2026-03-08T23:00:04.034528+0000 mon.a (mon.0) 419 : audit [INF] from='client.? 192.168.123.106:0/156983036' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "19da1389-a7b0-483c-b2d4-8be50f26c1c4"}]': finished 2026-03-08T23:00:05.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:04 vm11 bash[23232]: cluster 2026-03-08T23:00:04.037419+0000 mon.a (mon.0) 420 : cluster [DBG] osdmap e23: 4 total, 3 up, 4 in 2026-03-08T23:00:05.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:04 vm11 bash[23232]: cluster 2026-03-08T23:00:04.037419+0000 mon.a (mon.0) 420 : cluster [DBG] osdmap e23: 4 total, 3 up, 4 in 2026-03-08T23:00:05.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:04 vm11 bash[23232]: audit 2026-03-08T23:00:04.037587+0000 mon.a (mon.0) 421 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-08T23:00:05.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:04 vm11 bash[23232]: audit 2026-03-08T23:00:04.037587+0000 mon.a (mon.0) 421 : audit [DBG] from='mgr.14150 
192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-08T23:00:06.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:05 vm06 bash[20625]: audit 2026-03-08T23:00:04.700798+0000 mon.c (mon.2) 8 : audit [DBG] from='client.? 192.168.123.106:0/1658854119' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-08T23:00:06.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:05 vm06 bash[20625]: audit 2026-03-08T23:00:04.700798+0000 mon.c (mon.2) 8 : audit [DBG] from='client.? 192.168.123.106:0/1658854119' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-08T23:00:06.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:05 vm06 bash[27746]: audit 2026-03-08T23:00:04.700798+0000 mon.c (mon.2) 8 : audit [DBG] from='client.? 192.168.123.106:0/1658854119' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-08T23:00:06.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:05 vm06 bash[27746]: audit 2026-03-08T23:00:04.700798+0000 mon.c (mon.2) 8 : audit [DBG] from='client.? 192.168.123.106:0/1658854119' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-08T23:00:06.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:05 vm11 bash[23232]: audit 2026-03-08T23:00:04.700798+0000 mon.c (mon.2) 8 : audit [DBG] from='client.? 192.168.123.106:0/1658854119' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-08T23:00:06.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:05 vm11 bash[23232]: audit 2026-03-08T23:00:04.700798+0000 mon.c (mon.2) 8 : audit [DBG] from='client.? 
192.168.123.106:0/1658854119' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-08T23:00:07.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:06 vm06 bash[20625]: cluster 2026-03-08T23:00:05.468449+0000 mgr.y (mgr.14150) 129 : cluster [DBG] pgmap v100: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:00:07.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:06 vm06 bash[20625]: cluster 2026-03-08T23:00:05.468449+0000 mgr.y (mgr.14150) 129 : cluster [DBG] pgmap v100: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:00:07.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:06 vm06 bash[27746]: cluster 2026-03-08T23:00:05.468449+0000 mgr.y (mgr.14150) 129 : cluster [DBG] pgmap v100: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:00:07.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:06 vm06 bash[27746]: cluster 2026-03-08T23:00:05.468449+0000 mgr.y (mgr.14150) 129 : cluster [DBG] pgmap v100: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:00:07.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:06 vm11 bash[23232]: cluster 2026-03-08T23:00:05.468449+0000 mgr.y (mgr.14150) 129 : cluster [DBG] pgmap v100: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:00:07.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:06 vm11 bash[23232]: cluster 2026-03-08T23:00:05.468449+0000 mgr.y (mgr.14150) 129 : cluster [DBG] pgmap v100: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:00:09.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:08 vm06 bash[20625]: cluster 2026-03-08T23:00:07.468806+0000 mgr.y (mgr.14150) 130 : cluster [DBG] pgmap v101: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:00:09.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:08 vm06 
bash[20625]: cluster 2026-03-08T23:00:07.468806+0000 mgr.y (mgr.14150) 130 : cluster [DBG] pgmap v101: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:00:09.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:08 vm06 bash[27746]: cluster 2026-03-08T23:00:07.468806+0000 mgr.y (mgr.14150) 130 : cluster [DBG] pgmap v101: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:00:09.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:08 vm06 bash[27746]: cluster 2026-03-08T23:00:07.468806+0000 mgr.y (mgr.14150) 130 : cluster [DBG] pgmap v101: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:00:09.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:08 vm11 bash[23232]: cluster 2026-03-08T23:00:07.468806+0000 mgr.y (mgr.14150) 130 : cluster [DBG] pgmap v101: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:00:09.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:08 vm11 bash[23232]: cluster 2026-03-08T23:00:07.468806+0000 mgr.y (mgr.14150) 130 : cluster [DBG] pgmap v101: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:00:11.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:10 vm06 bash[20625]: cluster 2026-03-08T23:00:09.469127+0000 mgr.y (mgr.14150) 131 : cluster [DBG] pgmap v102: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:00:11.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:10 vm06 bash[20625]: cluster 2026-03-08T23:00:09.469127+0000 mgr.y (mgr.14150) 131 : cluster [DBG] pgmap v102: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:00:11.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:10 vm06 bash[27746]: cluster 2026-03-08T23:00:09.469127+0000 mgr.y (mgr.14150) 131 : cluster [DBG] pgmap v102: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 
2026-03-08T23:00:11.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:10 vm06 bash[27746]: cluster 2026-03-08T23:00:09.469127+0000 mgr.y (mgr.14150) 131 : cluster [DBG] pgmap v102: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:00:11.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:10 vm11 bash[23232]: cluster 2026-03-08T23:00:09.469127+0000 mgr.y (mgr.14150) 131 : cluster [DBG] pgmap v102: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:00:11.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:10 vm11 bash[23232]: cluster 2026-03-08T23:00:09.469127+0000 mgr.y (mgr.14150) 131 : cluster [DBG] pgmap v102: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:00:12.914 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:12 vm06 bash[20625]: cluster 2026-03-08T23:00:11.469410+0000 mgr.y (mgr.14150) 132 : cluster [DBG] pgmap v103: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:00:12.914 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:12 vm06 bash[20625]: cluster 2026-03-08T23:00:11.469410+0000 mgr.y (mgr.14150) 132 : cluster [DBG] pgmap v103: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:00:12.914 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:12 vm06 bash[27746]: cluster 2026-03-08T23:00:11.469410+0000 mgr.y (mgr.14150) 132 : cluster [DBG] pgmap v103: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:00:12.914 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:12 vm06 bash[27746]: cluster 2026-03-08T23:00:11.469410+0000 mgr.y (mgr.14150) 132 : cluster [DBG] pgmap v103: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:00:13.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:12 vm11 bash[23232]: cluster 2026-03-08T23:00:11.469410+0000 mgr.y (mgr.14150) 132 : cluster [DBG] pgmap v103: 
1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:00:13.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:12 vm11 bash[23232]: cluster 2026-03-08T23:00:11.469410+0000 mgr.y (mgr.14150) 132 : cluster [DBG] pgmap v103: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:00:13.841 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:13 vm06 bash[20625]: audit 2026-03-08T23:00:13.331965+0000 mon.a (mon.0) 422 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch 2026-03-08T23:00:13.841 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:13 vm06 bash[20625]: audit 2026-03-08T23:00:13.331965+0000 mon.a (mon.0) 422 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch 2026-03-08T23:00:13.841 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:13 vm06 bash[20625]: audit 2026-03-08T23:00:13.332865+0000 mon.a (mon.0) 423 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:00:13.841 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:13 vm06 bash[20625]: audit 2026-03-08T23:00:13.332865+0000 mon.a (mon.0) 423 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:00:13.841 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:13 vm06 bash[27746]: audit 2026-03-08T23:00:13.331965+0000 mon.a (mon.0) 422 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch 2026-03-08T23:00:13.841 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:13 vm06 bash[27746]: audit 2026-03-08T23:00:13.331965+0000 mon.a (mon.0) 422 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 
cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch 2026-03-08T23:00:13.841 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:13 vm06 bash[27746]: audit 2026-03-08T23:00:13.332865+0000 mon.a (mon.0) 423 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:00:13.841 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:13 vm06 bash[27746]: audit 2026-03-08T23:00:13.332865+0000 mon.a (mon.0) 423 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:00:14.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:13 vm11 bash[23232]: audit 2026-03-08T23:00:13.331965+0000 mon.a (mon.0) 422 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch 2026-03-08T23:00:14.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:13 vm11 bash[23232]: audit 2026-03-08T23:00:13.331965+0000 mon.a (mon.0) 422 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.3"}]: dispatch 2026-03-08T23:00:14.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:13 vm11 bash[23232]: audit 2026-03-08T23:00:13.332865+0000 mon.a (mon.0) 423 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:00:14.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:13 vm11 bash[23232]: audit 2026-03-08T23:00:13.332865+0000 mon.a (mon.0) 423 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:00:14.408 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:14 vm06 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. 
This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:00:14.409 INFO:journalctl@ceph.osd.0.vm06.stdout:Mar 08 23:00:14 vm06 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:00:14.409 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:14 vm06 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:00:14.409 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:00:14 vm06 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:00:14.409 INFO:journalctl@ceph.osd.1.vm06.stdout:Mar 08 23:00:14 vm06 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:00:14.409 INFO:journalctl@ceph.osd.2.vm06.stdout:Mar 08 23:00:14 vm06 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:00:14.757 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:14 vm06 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-08T23:00:14.757 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:14 vm06 bash[20625]: cephadm 2026-03-08T23:00:13.333604+0000 mgr.y (mgr.14150) 133 : cephadm [INF] Deploying daemon osd.3 on vm06
2026-03-08T23:00:14.757 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:14 vm06 bash[20625]: cluster 2026-03-08T23:00:13.469710+0000 mgr.y (mgr.14150) 134 : cluster [DBG] pgmap v104: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-08T23:00:14.757 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:14 vm06 bash[20625]: audit 2026-03-08T23:00:14.521356+0000 mon.a (mon.0) 424 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T23:00:14.757 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:14 vm06 bash[20625]: audit 2026-03-08T23:00:14.532695+0000 mon.a (mon.0) 425 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:00:14.757 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:14 vm06 bash[20625]: audit 2026-03-08T23:00:14.540456+0000 mon.a (mon.0) 426 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:00:14.757 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:00:14 vm06 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:00:14.757 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:14 vm06 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:00:14.757 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:14 vm06 bash[27746]: cephadm 2026-03-08T23:00:13.333604+0000 mgr.y (mgr.14150) 133 : cephadm [INF] Deploying daemon osd.3 on vm06
2026-03-08T23:00:14.757 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:14 vm06 bash[27746]: cluster 2026-03-08T23:00:13.469710+0000 mgr.y (mgr.14150) 134 : cluster [DBG] pgmap v104: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-08T23:00:14.757 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:14 vm06 bash[27746]: audit 2026-03-08T23:00:14.521356+0000 mon.a (mon.0) 424 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T23:00:14.757 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:14 vm06 bash[27746]: audit 2026-03-08T23:00:14.532695+0000 mon.a (mon.0) 425 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:00:14.757 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:14 vm06 bash[27746]: audit 2026-03-08T23:00:14.540456+0000 mon.a (mon.0) 426 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:00:14.757 INFO:journalctl@ceph.osd.0.vm06.stdout:Mar 08 23:00:14 vm06 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:00:14.757 INFO:journalctl@ceph.osd.1.vm06.stdout:Mar 08 23:00:14 vm06 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:00:14.757 INFO:journalctl@ceph.osd.2.vm06.stdout:Mar 08 23:00:14 vm06 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
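[editorial note: every daemon on vm06 repeats the same systemd complaint about line 23 of the cephadm-generated unit `ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service`. For reference, the override systemd is asking for would be a drop-in along the lines below. This is an illustrative sketch only: cephadm owns and regenerates these unit files, and it sets KillMode=none deliberately so that systemd does not reap the daemon's container processes, so the warning is expected noise in this test run.]

```
# Hypothetical drop-in: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service.d/override.conf
# Would need `systemctl daemon-reload` to take effect.
[Service]
KillMode=mixed
```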
2026-03-08T23:00:15.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:14 vm11 bash[23232]: cephadm 2026-03-08T23:00:13.333604+0000 mgr.y (mgr.14150) 133 : cephadm [INF] Deploying daemon osd.3 on vm06
2026-03-08T23:00:15.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:14 vm11 bash[23232]: cluster 2026-03-08T23:00:13.469710+0000 mgr.y (mgr.14150) 134 : cluster [DBG] pgmap v104: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-08T23:00:15.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:14 vm11 bash[23232]: audit 2026-03-08T23:00:14.521356+0000 mon.a (mon.0) 424 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T23:00:15.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:14 vm11 bash[23232]: audit 2026-03-08T23:00:14.532695+0000 mon.a (mon.0) 425 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:00:15.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:14 vm11 bash[23232]: audit 2026-03-08T23:00:14.540456+0000 mon.a (mon.0) 426 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:00:17.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:16 vm06 bash[20625]: cluster 2026-03-08T23:00:15.469946+0000 mgr.y (mgr.14150) 135 : cluster [DBG] pgmap v105: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-08T23:00:17.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:16 vm06 bash[27746]: cluster 2026-03-08T23:00:15.469946+0000 mgr.y (mgr.14150) 135 : cluster [DBG] pgmap v105: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-08T23:00:17.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:16 vm11 bash[23232]: cluster 2026-03-08T23:00:15.469946+0000 mgr.y (mgr.14150) 135 : cluster [DBG] pgmap v105: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-08T23:00:18.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:18 vm06 bash[20625]: cluster 2026-03-08T23:00:17.470212+0000 mgr.y (mgr.14150) 136 : cluster [DBG] pgmap v106: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-08T23:00:18.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:18 vm06 bash[20625]: audit 2026-03-08T23:00:18.446387+0000 mon.c (mon.2) 9 : audit [INF] from='osd.3 v2:192.168.123.106:6813/3847325262' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch
2026-03-08T23:00:18.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:18 vm06 bash[20625]: audit 2026-03-08T23:00:18.446670+0000 mon.a (mon.0) 427 : audit [INF] from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch
2026-03-08T23:00:18.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:18 vm06 bash[27746]: cluster 2026-03-08T23:00:17.470212+0000 mgr.y (mgr.14150) 136 : cluster [DBG] pgmap v106: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-08T23:00:18.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:18 vm06 bash[27746]: audit 2026-03-08T23:00:18.446387+0000 mon.c (mon.2) 9 : audit [INF] from='osd.3 v2:192.168.123.106:6813/3847325262' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch
2026-03-08T23:00:18.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:18 vm06 bash[27746]: audit 2026-03-08T23:00:18.446670+0000 mon.a (mon.0) 427 : audit [INF] from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch
2026-03-08T23:00:19.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:18 vm11 bash[23232]: cluster 2026-03-08T23:00:17.470212+0000 mgr.y (mgr.14150) 136 : cluster [DBG] pgmap v106: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-08T23:00:19.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:18 vm11 bash[23232]: audit 2026-03-08T23:00:18.446387+0000 mon.c (mon.2) 9 : audit [INF] from='osd.3 v2:192.168.123.106:6813/3847325262' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch
2026-03-08T23:00:19.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:18 vm11 bash[23232]: audit 2026-03-08T23:00:18.446670+0000 mon.a (mon.0) 427 : audit [INF] from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]: dispatch
2026-03-08T23:00:20.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:19 vm06 bash[20625]: audit 2026-03-08T23:00:18.668380+0000 mon.a (mon.0) 428 : audit [INF] from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]': finished
2026-03-08T23:00:20.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:19 vm06 bash[20625]: cluster 2026-03-08T23:00:18.671438+0000 mon.a (mon.0) 429 : cluster [DBG] osdmap e24: 4 total, 3 up, 4 in
2026-03-08T23:00:20.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:19 vm06 bash[20625]: audit 2026-03-08T23:00:18.672393+0000 mon.a (mon.0) 430 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-08T23:00:20.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:19 vm06 bash[20625]: audit 2026-03-08T23:00:18.674142+0000 mon.c (mon.2) 10 : audit [INF] from='osd.3 v2:192.168.123.106:6813/3847325262' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm06", "root=default"]}]: dispatch
2026-03-08T23:00:20.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:19 vm06 bash[20625]: audit 2026-03-08T23:00:18.680099+0000 mon.a (mon.0) 431 : audit [INF] from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm06", "root=default"]}]: dispatch
2026-03-08T23:00:20.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:19 vm06 bash[27746]: audit 2026-03-08T23:00:18.668380+0000 mon.a (mon.0) 428 : audit [INF] from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]': finished
2026-03-08T23:00:20.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:19 vm06 bash[27746]: cluster 2026-03-08T23:00:18.671438+0000 mon.a (mon.0) 429 : cluster [DBG] osdmap e24: 4 total, 3 up, 4 in
2026-03-08T23:00:20.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:19 vm06 bash[27746]: audit 2026-03-08T23:00:18.672393+0000 mon.a (mon.0) 430 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-08T23:00:20.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:19 vm06 bash[27746]: audit 2026-03-08T23:00:18.674142+0000 mon.c (mon.2) 10 : audit [INF] from='osd.3 v2:192.168.123.106:6813/3847325262' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm06", "root=default"]}]: dispatch
2026-03-08T23:00:20.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:19 vm06 bash[27746]: audit 2026-03-08T23:00:18.680099+0000 mon.a (mon.0) 431 : audit [INF] from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm06", "root=default"]}]: dispatch
2026-03-08T23:00:20.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:19 vm11 bash[23232]: audit 2026-03-08T23:00:18.668380+0000 mon.a (mon.0) 428 : audit [INF] from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["3"]}]': finished
2026-03-08T23:00:20.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:19 vm11 bash[23232]: cluster 2026-03-08T23:00:18.671438+0000 mon.a (mon.0) 429 : cluster [DBG] osdmap e24: 4 total, 3 up, 4 in
2026-03-08T23:00:20.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:19 vm11 bash[23232]: audit 2026-03-08T23:00:18.672393+0000 mon.a (mon.0) 430 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-08T23:00:20.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:19 vm11 bash[23232]: audit 2026-03-08T23:00:18.674142+0000 mon.c (mon.2) 10 : audit [INF] from='osd.3 v2:192.168.123.106:6813/3847325262' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm06", "root=default"]}]: dispatch
2026-03-08T23:00:20.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:19 vm11 bash[23232]: audit 2026-03-08T23:00:18.680099+0000 mon.a (mon.0) 431 : audit [INF] from='osd.3 ' entity='osd.3' cmd=[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm06", "root=default"]}]: dispatch
2026-03-08T23:00:20.875 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:20 vm06 bash[27746]: cluster 2026-03-08T23:00:19.470493+0000 mgr.y (mgr.14150) 137 : cluster [DBG] pgmap v108: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-08T23:00:20.875 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:20 vm06 bash[27746]: audit 2026-03-08T23:00:19.677619+0000 mon.a (mon.0) 432 : audit [INF] from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm06", "root=default"]}]': finished
2026-03-08T23:00:20.875 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:20 vm06 bash[27746]: cluster 2026-03-08T23:00:19.680832+0000 mon.a (mon.0) 433 : cluster [DBG] osdmap e25: 4 total, 3 up, 4 in
2026-03-08T23:00:20.875 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:20 vm06 bash[27746]: audit 2026-03-08T23:00:19.681928+0000 mon.a (mon.0) 434 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-08T23:00:20.875 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:20 vm06 bash[27746]: audit 2026-03-08T23:00:19.695398+0000 mon.a (mon.0) 435 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-08T23:00:20.875 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:20 vm06 bash[20625]: cluster 2026-03-08T23:00:19.470493+0000 mgr.y (mgr.14150) 137 : cluster [DBG] pgmap v108: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-08T23:00:20.875 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:20 vm06 bash[20625]: audit 2026-03-08T23:00:19.677619+0000 mon.a (mon.0) 432 : audit [INF] from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm06", "root=default"]}]': finished
2026-03-08T23:00:20.875 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:20 vm06 bash[20625]: cluster 2026-03-08T23:00:19.680832+0000 mon.a (mon.0) 433 : cluster [DBG] osdmap e25: 4 total, 3 up, 4 in
2026-03-08T23:00:20.875 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:20 vm06 bash[20625]: audit 2026-03-08T23:00:19.681928+0000 mon.a (mon.0) 434 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-08T23:00:20.875 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:20 vm06 bash[20625]: audit 2026-03-08T23:00:19.695398+0000 mon.a (mon.0) 435 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-08T23:00:21.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:20 vm11 bash[23232]: cluster 2026-03-08T23:00:19.470493+0000 mgr.y (mgr.14150) 137 : cluster [DBG] pgmap v108: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
2026-03-08T23:00:21.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:20 vm11 bash[23232]: audit 2026-03-08T23:00:19.677619+0000 mon.a (mon.0) 432 : audit [INF] from='osd.3 ' entity='osd.3' cmd='[{"prefix": "osd crush create-or-move", "id": 3, "weight":0.0195, "args": ["host=vm06", "root=default"]}]': finished
2026-03-08T23:00:21.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:20 vm11 bash[23232]: cluster 2026-03-08T23:00:19.680832+0000 mon.a (mon.0) 433 : cluster [DBG] osdmap e25: 4 total, 3 up, 4 in
2026-03-08T23:00:21.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:20 vm11 bash[23232]: audit 2026-03-08T23:00:19.681928+0000 mon.a (mon.0) 434 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-08T23:00:21.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:20 vm11 bash[23232]: audit 2026-03-08T23:00:19.695398+0000 mon.a (mon.0) 435 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-08T23:00:21.866 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:21 vm06 bash[20625]: cluster 2026-03-08T23:00:19.486873+0000 osd.3 (osd.3) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-08T23:00:21.866 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:21 vm06 bash[20625]: cluster 2026-03-08T23:00:19.486920+0000 osd.3 (osd.3) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-08T23:00:21.866 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:21 vm06 bash[20625]: audit 2026-03-08T23:00:20.686845+0000 mon.a (mon.0) 436 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-08T23:00:21.866 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:21 vm06 bash[20625]: cluster 2026-03-08T23:00:20.691241+0000 mon.a (mon.0) 437 : cluster [INF] osd.3 v2:192.168.123.106:6813/3847325262 boot
2026-03-08T23:00:21.866 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:21 vm06 bash[20625]: cluster 2026-03-08T23:00:20.691317+0000 mon.a (mon.0) 438 : cluster [DBG] osdmap e26: 4 total, 4 up, 4 in
2026-03-08T23:00:21.866 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:21 vm06 bash[20625]: audit 2026-03-08T23:00:20.692884+0000 mon.a (mon.0) 439 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-08T23:00:21.866 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:21 vm06 bash[20625]: audit 2026-03-08T23:00:20.917792+0000 mon.a (mon.0) 440 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:00:21.866 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:21 vm06 bash[20625]: audit 2026-03-08T23:00:20.923048+0000 mon.a (mon.0) 441 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:00:21.866 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:21 vm06 bash[20625]: audit 2026-03-08T23:00:20.923755+0000 mon.a (mon.0) 442 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:00:21.866 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:21 vm06 bash[20625]: audit 2026-03-08T23:00:20.924957+0000 mon.a (mon.0) 443 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T23:00:21.866 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:21 vm06 bash[20625]: audit 2026-03-08T23:00:20.928201+0000 mon.a (mon.0) 444 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:00:21.866 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:21 vm06 bash[27746]: cluster 2026-03-08T23:00:19.486873+0000 osd.3 (osd.3) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-08T23:00:21.866 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:21 vm06 bash[27746]: cluster 2026-03-08T23:00:19.486920+0000 osd.3 (osd.3) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-08T23:00:21.866
INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:21 vm06 bash[27746]: audit 2026-03-08T23:00:20.686845+0000 mon.a (mon.0) 436 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-08T23:00:21.866 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:21 vm06 bash[27746]: audit 2026-03-08T23:00:20.686845+0000 mon.a (mon.0) 436 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-08T23:00:21.867 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:21 vm06 bash[27746]: cluster 2026-03-08T23:00:20.691241+0000 mon.a (mon.0) 437 : cluster [INF] osd.3 v2:192.168.123.106:6813/3847325262 boot 2026-03-08T23:00:21.867 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:21 vm06 bash[27746]: cluster 2026-03-08T23:00:20.691241+0000 mon.a (mon.0) 437 : cluster [INF] osd.3 v2:192.168.123.106:6813/3847325262 boot 2026-03-08T23:00:21.867 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:21 vm06 bash[27746]: cluster 2026-03-08T23:00:20.691317+0000 mon.a (mon.0) 438 : cluster [DBG] osdmap e26: 4 total, 4 up, 4 in 2026-03-08T23:00:21.867 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:21 vm06 bash[27746]: cluster 2026-03-08T23:00:20.691317+0000 mon.a (mon.0) 438 : cluster [DBG] osdmap e26: 4 total, 4 up, 4 in 2026-03-08T23:00:21.867 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:21 vm06 bash[27746]: audit 2026-03-08T23:00:20.692884+0000 mon.a (mon.0) 439 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-08T23:00:21.867 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:21 vm06 bash[27746]: audit 2026-03-08T23:00:20.692884+0000 mon.a (mon.0) 439 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-08T23:00:21.867 
INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:21 vm06 bash[27746]: audit 2026-03-08T23:00:20.917792+0000 mon.a (mon.0) 440 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:00:21.867 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:21 vm06 bash[27746]: audit 2026-03-08T23:00:20.917792+0000 mon.a (mon.0) 440 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:00:21.867 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:21 vm06 bash[27746]: audit 2026-03-08T23:00:20.923048+0000 mon.a (mon.0) 441 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:00:21.867 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:21 vm06 bash[27746]: audit 2026-03-08T23:00:20.923048+0000 mon.a (mon.0) 441 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:00:21.867 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:21 vm06 bash[27746]: audit 2026-03-08T23:00:20.923755+0000 mon.a (mon.0) 442 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:00:21.867 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:21 vm06 bash[27746]: audit 2026-03-08T23:00:20.923755+0000 mon.a (mon.0) 442 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:00:21.867 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:21 vm06 bash[27746]: audit 2026-03-08T23:00:20.924957+0000 mon.a (mon.0) 443 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:00:21.867 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:21 vm06 bash[27746]: audit 2026-03-08T23:00:20.924957+0000 mon.a (mon.0) 443 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": 
"auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:00:21.867 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:21 vm06 bash[27746]: audit 2026-03-08T23:00:20.928201+0000 mon.a (mon.0) 444 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:00:21.867 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:21 vm06 bash[27746]: audit 2026-03-08T23:00:20.928201+0000 mon.a (mon.0) 444 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:00:21.925 INFO:teuthology.orchestra.run.vm06.stdout:Created osd(s) 3 on host 'vm06' 2026-03-08T23:00:22.002 DEBUG:teuthology.orchestra.run.vm06:osd.3> sudo journalctl -f -n 0 -u ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@osd.3.service 2026-03-08T23:00:22.003 INFO:tasks.cephadm:Deploying osd.4 on vm11 with /dev/vde... 2026-03-08T23:00:22.003 DEBUG:teuthology.orchestra.run.vm11:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e2eb96e6-1b41-11f1-83e5-75f1b5373d30 -- lvm zap /dev/vde 2026-03-08T23:00:22.009 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:21 vm11 bash[23232]: cluster 2026-03-08T23:00:19.486873+0000 osd.3 (osd.3) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-08T23:00:22.009 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:21 vm11 bash[23232]: cluster 2026-03-08T23:00:19.486873+0000 osd.3 (osd.3) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-08T23:00:22.009 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:21 vm11 bash[23232]: cluster 2026-03-08T23:00:19.486920+0000 osd.3 (osd.3) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-08T23:00:22.009 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:21 vm11 bash[23232]: cluster 2026-03-08T23:00:19.486920+0000 osd.3 (osd.3) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-08T23:00:22.009 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:21 
vm11 bash[23232]: audit 2026-03-08T23:00:20.686845+0000 mon.a (mon.0) 436 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-08T23:00:22.009 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:21 vm11 bash[23232]: audit 2026-03-08T23:00:20.686845+0000 mon.a (mon.0) 436 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-08T23:00:22.010 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:21 vm11 bash[23232]: cluster 2026-03-08T23:00:20.691241+0000 mon.a (mon.0) 437 : cluster [INF] osd.3 v2:192.168.123.106:6813/3847325262 boot 2026-03-08T23:00:22.010 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:21 vm11 bash[23232]: cluster 2026-03-08T23:00:20.691241+0000 mon.a (mon.0) 437 : cluster [INF] osd.3 v2:192.168.123.106:6813/3847325262 boot 2026-03-08T23:00:22.010 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:21 vm11 bash[23232]: cluster 2026-03-08T23:00:20.691317+0000 mon.a (mon.0) 438 : cluster [DBG] osdmap e26: 4 total, 4 up, 4 in 2026-03-08T23:00:22.010 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:21 vm11 bash[23232]: cluster 2026-03-08T23:00:20.691317+0000 mon.a (mon.0) 438 : cluster [DBG] osdmap e26: 4 total, 4 up, 4 in 2026-03-08T23:00:22.010 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:21 vm11 bash[23232]: audit 2026-03-08T23:00:20.692884+0000 mon.a (mon.0) 439 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-08T23:00:22.010 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:21 vm11 bash[23232]: audit 2026-03-08T23:00:20.692884+0000 mon.a (mon.0) 439 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-08T23:00:22.010 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:21 vm11 bash[23232]: audit 
2026-03-08T23:00:20.917792+0000 mon.a (mon.0) 440 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:00:22.010 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:21 vm11 bash[23232]: audit 2026-03-08T23:00:20.917792+0000 mon.a (mon.0) 440 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:00:22.010 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:21 vm11 bash[23232]: audit 2026-03-08T23:00:20.923048+0000 mon.a (mon.0) 441 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:00:22.010 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:21 vm11 bash[23232]: audit 2026-03-08T23:00:20.923048+0000 mon.a (mon.0) 441 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:00:22.010 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:21 vm11 bash[23232]: audit 2026-03-08T23:00:20.923755+0000 mon.a (mon.0) 442 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:00:22.010 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:21 vm11 bash[23232]: audit 2026-03-08T23:00:20.923755+0000 mon.a (mon.0) 442 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:00:22.010 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:21 vm11 bash[23232]: audit 2026-03-08T23:00:20.924957+0000 mon.a (mon.0) 443 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:00:22.010 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:21 vm11 bash[23232]: audit 2026-03-08T23:00:20.924957+0000 mon.a (mon.0) 443 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:00:22.010 
INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:21 vm11 bash[23232]: audit 2026-03-08T23:00:20.928201+0000 mon.a (mon.0) 444 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:00:22.010 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:21 vm11 bash[23232]: audit 2026-03-08T23:00:20.928201+0000 mon.a (mon.0) 444 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:00:23.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:22 vm06 bash[20625]: cluster 2026-03-08T23:00:21.470789+0000 mgr.y (mgr.14150) 138 : cluster [DBG] pgmap v111: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:00:23.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:22 vm06 bash[20625]: cluster 2026-03-08T23:00:21.470789+0000 mgr.y (mgr.14150) 138 : cluster [DBG] pgmap v111: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:00:23.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:22 vm06 bash[20625]: audit 2026-03-08T23:00:21.912398+0000 mon.a (mon.0) 445 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:00:23.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:22 vm06 bash[20625]: audit 2026-03-08T23:00:21.912398+0000 mon.a (mon.0) 445 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:00:23.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:22 vm06 bash[20625]: audit 2026-03-08T23:00:21.916780+0000 mon.a (mon.0) 446 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:00:23.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:22 vm06 bash[20625]: audit 2026-03-08T23:00:21.916780+0000 mon.a (mon.0) 446 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 
2026-03-08T23:00:23.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:22 vm06 bash[20625]: audit 2026-03-08T23:00:21.920945+0000 mon.a (mon.0) 447 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:00:23.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:22 vm06 bash[20625]: audit 2026-03-08T23:00:21.920945+0000 mon.a (mon.0) 447 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:00:23.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:22 vm06 bash[20625]: cluster 2026-03-08T23:00:21.949624+0000 mon.a (mon.0) 448 : cluster [DBG] osdmap e27: 4 total, 4 up, 4 in 2026-03-08T23:00:23.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:22 vm06 bash[20625]: cluster 2026-03-08T23:00:21.949624+0000 mon.a (mon.0) 448 : cluster [DBG] osdmap e27: 4 total, 4 up, 4 in 2026-03-08T23:00:23.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:22 vm06 bash[27746]: cluster 2026-03-08T23:00:21.470789+0000 mgr.y (mgr.14150) 138 : cluster [DBG] pgmap v111: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:00:23.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:22 vm06 bash[27746]: cluster 2026-03-08T23:00:21.470789+0000 mgr.y (mgr.14150) 138 : cluster [DBG] pgmap v111: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:00:23.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:22 vm06 bash[27746]: audit 2026-03-08T23:00:21.912398+0000 mon.a (mon.0) 445 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:00:23.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:22 vm06 bash[27746]: audit 2026-03-08T23:00:21.912398+0000 mon.a (mon.0) 445 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:00:23.030 
INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:22 vm06 bash[27746]: audit 2026-03-08T23:00:21.916780+0000 mon.a (mon.0) 446 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:00:23.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:22 vm06 bash[27746]: audit 2026-03-08T23:00:21.916780+0000 mon.a (mon.0) 446 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:00:23.031 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:22 vm06 bash[27746]: audit 2026-03-08T23:00:21.920945+0000 mon.a (mon.0) 447 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:00:23.031 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:22 vm06 bash[27746]: audit 2026-03-08T23:00:21.920945+0000 mon.a (mon.0) 447 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:00:23.031 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:22 vm06 bash[27746]: cluster 2026-03-08T23:00:21.949624+0000 mon.a (mon.0) 448 : cluster [DBG] osdmap e27: 4 total, 4 up, 4 in 2026-03-08T23:00:23.031 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:22 vm06 bash[27746]: cluster 2026-03-08T23:00:21.949624+0000 mon.a (mon.0) 448 : cluster [DBG] osdmap e27: 4 total, 4 up, 4 in 2026-03-08T23:00:23.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:22 vm11 bash[23232]: cluster 2026-03-08T23:00:21.470789+0000 mgr.y (mgr.14150) 138 : cluster [DBG] pgmap v111: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:00:23.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:22 vm11 bash[23232]: cluster 2026-03-08T23:00:21.470789+0000 mgr.y (mgr.14150) 138 : cluster [DBG] pgmap v111: 1 pgs: 1 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail 2026-03-08T23:00:23.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:22 vm11 bash[23232]: audit 2026-03-08T23:00:21.912398+0000 mon.a (mon.0) 445 : audit [DBG] from='mgr.14150 
192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:00:23.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:22 vm11 bash[23232]: audit 2026-03-08T23:00:21.912398+0000 mon.a (mon.0) 445 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:00:23.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:22 vm11 bash[23232]: audit 2026-03-08T23:00:21.916780+0000 mon.a (mon.0) 446 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:00:23.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:22 vm11 bash[23232]: audit 2026-03-08T23:00:21.916780+0000 mon.a (mon.0) 446 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:00:23.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:22 vm11 bash[23232]: audit 2026-03-08T23:00:21.920945+0000 mon.a (mon.0) 447 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:00:23.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:22 vm11 bash[23232]: audit 2026-03-08T23:00:21.920945+0000 mon.a (mon.0) 447 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:00:23.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:22 vm11 bash[23232]: cluster 2026-03-08T23:00:21.949624+0000 mon.a (mon.0) 448 : cluster [DBG] osdmap e27: 4 total, 4 up, 4 in 2026-03-08T23:00:23.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:22 vm11 bash[23232]: cluster 2026-03-08T23:00:21.949624+0000 mon.a (mon.0) 448 : cluster [DBG] osdmap e27: 4 total, 4 up, 4 in 2026-03-08T23:00:25.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:24 vm06 bash[20625]: cluster 2026-03-08T23:00:23.471139+0000 mgr.y (mgr.14150) 139 : cluster [DBG] pgmap v113: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-08T23:00:25.280 
INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:24 vm06 bash[20625]: cluster 2026-03-08T23:00:23.471139+0000 mgr.y (mgr.14150) 139 : cluster [DBG] pgmap v113: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-08T23:00:25.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:24 vm06 bash[27746]: cluster 2026-03-08T23:00:23.471139+0000 mgr.y (mgr.14150) 139 : cluster [DBG] pgmap v113: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-08T23:00:25.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:24 vm06 bash[27746]: cluster 2026-03-08T23:00:23.471139+0000 mgr.y (mgr.14150) 139 : cluster [DBG] pgmap v113: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-08T23:00:25.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:24 vm11 bash[23232]: cluster 2026-03-08T23:00:23.471139+0000 mgr.y (mgr.14150) 139 : cluster [DBG] pgmap v113: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-08T23:00:25.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:24 vm11 bash[23232]: cluster 2026-03-08T23:00:23.471139+0000 mgr.y (mgr.14150) 139 : cluster [DBG] pgmap v113: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-08T23:00:26.618 INFO:teuthology.orchestra.run.vm11.stderr:Inferring config /var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/mon.b/config 2026-03-08T23:00:26.905 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:26 vm11 bash[23232]: cluster 2026-03-08T23:00:25.471423+0000 mgr.y (mgr.14150) 140 : cluster [DBG] pgmap v114: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-08T23:00:26.905 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:26 vm11 bash[23232]: cluster 2026-03-08T23:00:25.471423+0000 mgr.y (mgr.14150) 140 : cluster [DBG] pgmap v114: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-08T23:00:27.030 
INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:26 vm06 bash[20625]: cluster 2026-03-08T23:00:25.471423+0000 mgr.y (mgr.14150) 140 : cluster [DBG] pgmap v114: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-08T23:00:27.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:26 vm06 bash[20625]: cluster 2026-03-08T23:00:25.471423+0000 mgr.y (mgr.14150) 140 : cluster [DBG] pgmap v114: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-08T23:00:27.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:26 vm06 bash[27746]: cluster 2026-03-08T23:00:25.471423+0000 mgr.y (mgr.14150) 140 : cluster [DBG] pgmap v114: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-08T23:00:27.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:26 vm06 bash[27746]: cluster 2026-03-08T23:00:25.471423+0000 mgr.y (mgr.14150) 140 : cluster [DBG] pgmap v114: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-08T23:00:27.557 INFO:teuthology.orchestra.run.vm11.stdout: 2026-03-08T23:00:27.577 DEBUG:teuthology.orchestra.run.vm11:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e2eb96e6-1b41-11f1-83e5-75f1b5373d30 -- ceph orch daemon add osd vm11:/dev/vde 2026-03-08T23:00:29.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:29 vm06 bash[20625]: cluster 2026-03-08T23:00:27.471700+0000 mgr.y (mgr.14150) 141 : cluster [DBG] pgmap v115: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-08T23:00:29.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:29 vm06 bash[20625]: cluster 2026-03-08T23:00:27.471700+0000 mgr.y (mgr.14150) 141 : cluster [DBG] pgmap v115: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-08T23:00:29.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 
23:00:29 vm06 bash[20625]: cephadm 2026-03-08T23:00:28.062258+0000 mgr.y (mgr.14150) 142 : cephadm [INF] Detected new or changed devices on vm06 2026-03-08T23:00:29.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:29 vm06 bash[20625]: cephadm 2026-03-08T23:00:28.062258+0000 mgr.y (mgr.14150) 142 : cephadm [INF] Detected new or changed devices on vm06 2026-03-08T23:00:29.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:29 vm06 bash[20625]: audit 2026-03-08T23:00:28.205489+0000 mon.a (mon.0) 449 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:00:29.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:29 vm06 bash[20625]: audit 2026-03-08T23:00:28.205489+0000 mon.a (mon.0) 449 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:00:29.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:29 vm06 bash[20625]: audit 2026-03-08T23:00:28.210242+0000 mon.a (mon.0) 450 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:00:29.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:29 vm06 bash[20625]: audit 2026-03-08T23:00:28.210242+0000 mon.a (mon.0) 450 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:00:29.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:29 vm06 bash[20625]: audit 2026-03-08T23:00:28.211539+0000 mon.a (mon.0) 451 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm06", "name": "osd_memory_target"}]: dispatch 2026-03-08T23:00:29.531 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:29 vm06 bash[20625]: audit 2026-03-08T23:00:28.211539+0000 mon.a (mon.0) 451 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm06", "name": "osd_memory_target"}]: dispatch 2026-03-08T23:00:29.531 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:29 vm06 
bash[20625]: audit 2026-03-08T23:00:28.212150+0000 mon.a (mon.0) 452 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:00:29.531 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:29 vm06 bash[20625]: audit 2026-03-08T23:00:28.212150+0000 mon.a (mon.0) 452 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:00:29.531 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:29 vm06 bash[20625]: audit 2026-03-08T23:00:28.212563+0000 mon.a (mon.0) 453 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:00:29.531 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:29 vm06 bash[20625]: audit 2026-03-08T23:00:28.212563+0000 mon.a (mon.0) 453 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:00:29.531 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:29 vm06 bash[20625]: audit 2026-03-08T23:00:28.215988+0000 mon.a (mon.0) 454 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:00:29.531 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:29 vm06 bash[20625]: audit 2026-03-08T23:00:28.215988+0000 mon.a (mon.0) 454 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:00:29.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:29 vm06 bash[27746]: cluster 2026-03-08T23:00:27.471700+0000 mgr.y (mgr.14150) 141 : cluster [DBG] pgmap v115: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-08T23:00:29.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:29 vm06 bash[27746]: cluster 2026-03-08T23:00:27.471700+0000 mgr.y (mgr.14150) 141 : cluster [DBG] pgmap v115: 1 pgs: 1 
active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-08T23:00:29.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:29 vm06 bash[27746]: cephadm 2026-03-08T23:00:28.062258+0000 mgr.y (mgr.14150) 142 : cephadm [INF] Detected new or changed devices on vm06 2026-03-08T23:00:29.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:29 vm06 bash[27746]: cephadm 2026-03-08T23:00:28.062258+0000 mgr.y (mgr.14150) 142 : cephadm [INF] Detected new or changed devices on vm06 2026-03-08T23:00:29.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:29 vm06 bash[27746]: audit 2026-03-08T23:00:28.205489+0000 mon.a (mon.0) 449 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:00:29.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:29 vm06 bash[27746]: audit 2026-03-08T23:00:28.205489+0000 mon.a (mon.0) 449 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:00:29.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:29 vm06 bash[27746]: audit 2026-03-08T23:00:28.210242+0000 mon.a (mon.0) 450 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:00:29.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:29 vm06 bash[27746]: audit 2026-03-08T23:00:28.210242+0000 mon.a (mon.0) 450 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:00:29.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:29 vm06 bash[27746]: audit 2026-03-08T23:00:28.211539+0000 mon.a (mon.0) 451 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm06", "name": "osd_memory_target"}]: dispatch 2026-03-08T23:00:29.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:29 vm06 bash[27746]: audit 2026-03-08T23:00:28.211539+0000 mon.a (mon.0) 451 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config rm", "who": 
"osd/host:vm06", "name": "osd_memory_target"}]: dispatch
2026-03-08T23:00:29.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:29 vm06 bash[27746]: audit 2026-03-08T23:00:28.212150+0000 mon.a (mon.0) 452 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:00:29.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:29 vm06 bash[27746]: audit 2026-03-08T23:00:28.212563+0000 mon.a (mon.0) 453 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T23:00:29.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:29 vm06 bash[27746]: audit 2026-03-08T23:00:28.215988+0000 mon.a (mon.0) 454 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:00:29.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:29 vm11 bash[23232]: cluster 2026-03-08T23:00:27.471700+0000 mgr.y (mgr.14150) 141 : cluster [DBG] pgmap v115: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-08T23:00:29.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:29 vm11 bash[23232]: cephadm 2026-03-08T23:00:28.062258+0000 mgr.y (mgr.14150) 142 : cephadm [INF] Detected new or changed devices on vm06
2026-03-08T23:00:29.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:29 vm11 bash[23232]: audit 2026-03-08T23:00:28.205489+0000 mon.a (mon.0) 449 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:00:29.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:29 vm11 bash[23232]: audit 2026-03-08T23:00:28.210242+0000 mon.a (mon.0) 450 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:00:29.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:29 vm11 bash[23232]: audit 2026-03-08T23:00:28.211539+0000 mon.a (mon.0) 451 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm06", "name": "osd_memory_target"}]: dispatch
2026-03-08T23:00:29.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:29 vm11 bash[23232]: audit 2026-03-08T23:00:28.212150+0000 mon.a (mon.0) 452 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:00:29.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:29 vm11 bash[23232]: audit 2026-03-08T23:00:28.212563+0000 mon.a (mon.0) 453 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T23:00:29.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:29 vm11 bash[23232]: audit 2026-03-08T23:00:28.215988+0000 mon.a (mon.0) 454 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:00:30.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:30 vm06 bash[20625]: cluster 2026-03-08T23:00:29.471990+0000 mgr.y (mgr.14150) 143 : cluster [DBG] pgmap v116: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-08T23:00:30.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:30 vm06 bash[27746]: cluster 2026-03-08T23:00:29.471990+0000 mgr.y (mgr.14150) 143 : cluster [DBG] pgmap v116: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-08T23:00:30.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:30 vm11 bash[23232]: cluster 2026-03-08T23:00:29.471990+0000 mgr.y (mgr.14150) 143 : cluster [DBG] pgmap v116: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-08T23:00:32.247 INFO:teuthology.orchestra.run.vm11.stderr:Inferring config /var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/mon.b/config
2026-03-08T23:00:32.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:32 vm06 bash[20625]: cluster 2026-03-08T23:00:31.472224+0000 mgr.y (mgr.14150) 144 : cluster [DBG] pgmap v117: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-08T23:00:32.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:32 vm06 bash[20625]: audit 2026-03-08T23:00:32.515314+0000 mon.a (mon.0) 455 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-08T23:00:32.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:32 vm06 bash[20625]: audit 2026-03-08T23:00:32.517591+0000 mon.a (mon.0) 456 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-08T23:00:32.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:32 vm06 bash[20625]: audit 2026-03-08T23:00:32.518372+0000 mon.a (mon.0) 457 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:00:32.781 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:32 vm06 bash[27746]: cluster 2026-03-08T23:00:31.472224+0000 mgr.y (mgr.14150) 144 : cluster [DBG] pgmap v117: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-08T23:00:32.781 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:32 vm06 bash[27746]: audit 2026-03-08T23:00:32.515314+0000 mon.a (mon.0) 455 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-08T23:00:32.781 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:32 vm06 bash[27746]: audit 2026-03-08T23:00:32.517591+0000 mon.a (mon.0) 456 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-08T23:00:32.781 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:32 vm06 bash[27746]: audit 2026-03-08T23:00:32.518372+0000 mon.a (mon.0) 457 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:00:32.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:32 vm11 bash[23232]: cluster 2026-03-08T23:00:31.472224+0000 mgr.y (mgr.14150) 144 : cluster [DBG] pgmap v117: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-08T23:00:32.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:32 vm11 bash[23232]: audit 2026-03-08T23:00:32.515314+0000 mon.a (mon.0) 455 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-08T23:00:32.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:32 vm11 bash[23232]: audit 2026-03-08T23:00:32.517591+0000 mon.a (mon.0) 456 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-08T23:00:32.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:32 vm11 bash[23232]: audit 2026-03-08T23:00:32.518372+0000 mon.a (mon.0) 457 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:00:33.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:33 vm11 bash[23232]: audit 2026-03-08T23:00:32.513015+0000 mgr.y (mgr.14150) 145 : audit [DBG] from='client.24199 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm11:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch
2026-03-08T23:00:34.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:33 vm06 bash[20625]: audit 2026-03-08T23:00:32.513015+0000 mgr.y (mgr.14150) 145 : audit [DBG] from='client.24199 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm11:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch
2026-03-08T23:00:34.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:33 vm06 bash[27746]: audit 2026-03-08T23:00:32.513015+0000 mgr.y (mgr.14150) 145 : audit [DBG] from='client.24199 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm11:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch
2026-03-08T23:00:34.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:34 vm11 bash[23232]: cluster 2026-03-08T23:00:33.472500+0000 mgr.y (mgr.14150) 146 : cluster [DBG] pgmap v118: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-08T23:00:35.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:34 vm06 bash[20625]: cluster 2026-03-08T23:00:33.472500+0000 mgr.y (mgr.14150) 146 : cluster [DBG] pgmap v118: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-08T23:00:35.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:34 vm06 bash[27746]: cluster 2026-03-08T23:00:33.472500+0000 mgr.y (mgr.14150) 146 : cluster [DBG] pgmap v118: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-08T23:00:36.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:36 vm11 bash[23232]: cluster 2026-03-08T23:00:35.472780+0000 mgr.y (mgr.14150) 147 : cluster [DBG] pgmap v119: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-08T23:00:37.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:36 vm06 bash[20625]: cluster 2026-03-08T23:00:35.472780+0000 mgr.y (mgr.14150) 147 : cluster [DBG] pgmap v119: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-08T23:00:37.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:36 vm06 bash[27746]: cluster 2026-03-08T23:00:35.472780+0000 mgr.y (mgr.14150) 147 : cluster [DBG] pgmap v119: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-08T23:00:38.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:38 vm11 bash[23232]: cluster 2026-03-08T23:00:37.473042+0000 mgr.y (mgr.14150) 148 : cluster [DBG] pgmap v120: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-08T23:00:38.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:38 vm11 bash[23232]: audit 2026-03-08T23:00:37.968298+0000 mon.b (mon.1) 10 : audit [INF] from='client.? 192.168.123.111:0/2903794930' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "2b8b0ad5-79bc-4b4c-a515-bc6c029f416f"}]: dispatch
2026-03-08T23:00:38.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:38 vm11 bash[23232]: audit 2026-03-08T23:00:37.968775+0000 mon.a (mon.0) 458 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "2b8b0ad5-79bc-4b4c-a515-bc6c029f416f"}]: dispatch
2026-03-08T23:00:38.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:38 vm11 bash[23232]: audit 2026-03-08T23:00:37.972124+0000 mon.a (mon.0) 459 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "2b8b0ad5-79bc-4b4c-a515-bc6c029f416f"}]': finished
2026-03-08T23:00:38.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:38 vm11 bash[23232]: cluster 2026-03-08T23:00:37.977102+0000 mon.a (mon.0) 460 : cluster [DBG] osdmap e28: 5 total, 4 up, 5 in
2026-03-08T23:00:38.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:38 vm11 bash[23232]: audit 2026-03-08T23:00:37.977387+0000 mon.a (mon.0) 461 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-08T23:00:39.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:38 vm06 bash[20625]: cluster 2026-03-08T23:00:37.473042+0000 mgr.y (mgr.14150) 148 : cluster [DBG] pgmap v120: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-08T23:00:39.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:38 vm06 bash[20625]: audit 2026-03-08T23:00:37.968298+0000 mon.b (mon.1) 10 : audit [INF] from='client.? 192.168.123.111:0/2903794930' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "2b8b0ad5-79bc-4b4c-a515-bc6c029f416f"}]: dispatch
2026-03-08T23:00:39.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:38 vm06 bash[20625]: audit 2026-03-08T23:00:37.968775+0000 mon.a (mon.0) 458 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "2b8b0ad5-79bc-4b4c-a515-bc6c029f416f"}]: dispatch
2026-03-08T23:00:39.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:38 vm06 bash[20625]: audit 2026-03-08T23:00:37.972124+0000 mon.a (mon.0) 459 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "2b8b0ad5-79bc-4b4c-a515-bc6c029f416f"}]': finished
2026-03-08T23:00:39.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:38 vm06 bash[20625]: cluster 2026-03-08T23:00:37.977102+0000 mon.a (mon.0) 460 : cluster [DBG] osdmap e28: 5 total, 4 up, 5 in
2026-03-08T23:00:39.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:38 vm06 bash[20625]: audit 2026-03-08T23:00:37.977387+0000 mon.a (mon.0) 461 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-08T23:00:39.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:38 vm06 bash[27746]: cluster 2026-03-08T23:00:37.473042+0000 mgr.y (mgr.14150) 148 : cluster [DBG] pgmap v120: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-08T23:00:39.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:38 vm06 bash[27746]: audit 2026-03-08T23:00:37.968298+0000 mon.b (mon.1) 10 : audit [INF] from='client.? 192.168.123.111:0/2903794930' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "2b8b0ad5-79bc-4b4c-a515-bc6c029f416f"}]: dispatch
2026-03-08T23:00:39.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:38 vm06 bash[27746]: audit 2026-03-08T23:00:37.968775+0000 mon.a (mon.0) 458 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "2b8b0ad5-79bc-4b4c-a515-bc6c029f416f"}]: dispatch
2026-03-08T23:00:39.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:38 vm06 bash[27746]: audit 2026-03-08T23:00:37.972124+0000 mon.a (mon.0) 459 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "2b8b0ad5-79bc-4b4c-a515-bc6c029f416f"}]': finished
2026-03-08T23:00:39.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:38 vm06 bash[27746]: cluster 2026-03-08T23:00:37.977102+0000 mon.a (mon.0) 460 : cluster [DBG] osdmap e28: 5 total, 4 up, 5 in
2026-03-08T23:00:39.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:38 vm06 bash[27746]: audit 2026-03-08T23:00:37.977387+0000 mon.a (mon.0) 461 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-08T23:00:40.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:39 vm06 bash[20625]: audit 2026-03-08T23:00:38.630328+0000 mon.b (mon.1) 11 : audit [DBG] from='client.? 192.168.123.111:0/1252016163' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-08T23:00:40.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:39 vm06 bash[27746]: audit 2026-03-08T23:00:38.630328+0000 mon.b (mon.1) 11 : audit [DBG] from='client.? 192.168.123.111:0/1252016163' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-08T23:00:40.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:39 vm11 bash[23232]: audit 2026-03-08T23:00:38.630328+0000 mon.b (mon.1) 11 : audit [DBG] from='client.? 192.168.123.111:0/1252016163' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
2026-03-08T23:00:41.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:40 vm06 bash[20625]: cluster 2026-03-08T23:00:39.473403+0000 mgr.y (mgr.14150) 149 : cluster [DBG] pgmap v122: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-08T23:00:41.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:40 vm06 bash[27746]: cluster 2026-03-08T23:00:39.473403+0000 mgr.y (mgr.14150) 149 : cluster [DBG] pgmap v122: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-08T23:00:41.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:40 vm11 bash[23232]: cluster 2026-03-08T23:00:39.473403+0000 mgr.y (mgr.14150) 149 : cluster [DBG] pgmap v122: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-08T23:00:43.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:42 vm06 bash[20625]: cluster 2026-03-08T23:00:41.473690+0000 mgr.y (mgr.14150) 150 : cluster [DBG] pgmap v123: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-08T23:00:43.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:42 vm06 bash[27746]: cluster 2026-03-08T23:00:41.473690+0000 mgr.y (mgr.14150) 150 : cluster [DBG] pgmap v123: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-08T23:00:43.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:42 vm11 bash[23232]: cluster 2026-03-08T23:00:41.473690+0000 mgr.y (mgr.14150) 150 : cluster [DBG] pgmap v123: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-08T23:00:45.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:44 vm06 bash[20625]: cluster 2026-03-08T23:00:43.473966+0000 mgr.y (mgr.14150) 151 : cluster [DBG] pgmap v124: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-08T23:00:45.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:44 vm06 bash[27746]: cluster 2026-03-08T23:00:43.473966+0000 mgr.y (mgr.14150) 151 : cluster [DBG] pgmap v124: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-08T23:00:45.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:44 vm11 bash[23232]: cluster 2026-03-08T23:00:43.473966+0000 mgr.y (mgr.14150) 151 : cluster [DBG] pgmap v124: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-08T23:00:47.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:46 vm06 bash[20625]: cluster 2026-03-08T23:00:45.474240+0000 mgr.y (mgr.14150) 152 : cluster [DBG] pgmap v125: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-08T23:00:47.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:46 vm06 bash[27746]: cluster 2026-03-08T23:00:45.474240+0000 mgr.y (mgr.14150) 152 : cluster [DBG] pgmap v125: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-08T23:00:47.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:46 vm11 bash[23232]: cluster 2026-03-08T23:00:45.474240+0000 mgr.y (mgr.14150) 152 : cluster [DBG] pgmap v125: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-08T23:00:48.016 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:47 vm11 bash[23232]: audit 2026-03-08T23:00:47.535511+0000 mon.a (mon.0) 462 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch
2026-03-08T23:00:48.016 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:47 vm11 bash[23232]: audit 2026-03-08T23:00:47.536112+0000 mon.a (mon.0) 463 : audit [DBG] from='mgr.14150
192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:00:48.016 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:47 vm11 bash[23232]: audit 2026-03-08T23:00:47.536112+0000 mon.a (mon.0) 463 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:00:48.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:47 vm06 bash[20625]: audit 2026-03-08T23:00:47.535511+0000 mon.a (mon.0) 462 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch 2026-03-08T23:00:48.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:47 vm06 bash[20625]: audit 2026-03-08T23:00:47.535511+0000 mon.a (mon.0) 462 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch 2026-03-08T23:00:48.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:47 vm06 bash[20625]: audit 2026-03-08T23:00:47.536112+0000 mon.a (mon.0) 463 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:00:48.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:47 vm06 bash[20625]: audit 2026-03-08T23:00:47.536112+0000 mon.a (mon.0) 463 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:00:48.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:47 vm06 bash[27746]: audit 2026-03-08T23:00:47.535511+0000 mon.a (mon.0) 462 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch 2026-03-08T23:00:48.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:47 vm06 bash[27746]: audit 2026-03-08T23:00:47.535511+0000 mon.a (mon.0) 462 : audit [INF] 
from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.4"}]: dispatch 2026-03-08T23:00:48.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:47 vm06 bash[27746]: audit 2026-03-08T23:00:47.536112+0000 mon.a (mon.0) 463 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:00:48.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:47 vm06 bash[27746]: audit 2026-03-08T23:00:47.536112+0000 mon.a (mon.0) 463 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:00:48.608 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:48 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:00:48.608 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:48 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:00:48.608 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:00:48 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. 
Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:00:48.608 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:00:48 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:00:48.922 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:48 vm11 bash[23232]: cluster 2026-03-08T23:00:47.474508+0000 mgr.y (mgr.14150) 153 : cluster [DBG] pgmap v126: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-08T23:00:48.922 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:48 vm11 bash[23232]: cluster 2026-03-08T23:00:47.474508+0000 mgr.y (mgr.14150) 153 : cluster [DBG] pgmap v126: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-08T23:00:48.922 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:48 vm11 bash[23232]: cephadm 2026-03-08T23:00:47.536562+0000 mgr.y (mgr.14150) 154 : cephadm [INF] Deploying daemon osd.4 on vm11 2026-03-08T23:00:48.922 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:48 vm11 bash[23232]: cephadm 2026-03-08T23:00:47.536562+0000 mgr.y (mgr.14150) 154 : cephadm [INF] Deploying daemon osd.4 on vm11 2026-03-08T23:00:48.922 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:48 vm11 bash[23232]: audit 2026-03-08T23:00:48.640054+0000 mon.a (mon.0) 464 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:00:48.922 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:48 vm11 bash[23232]: audit 2026-03-08T23:00:48.640054+0000 mon.a (mon.0) 464 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": 
"config dump", "format": "json"}]: dispatch 2026-03-08T23:00:48.922 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:48 vm11 bash[23232]: audit 2026-03-08T23:00:48.645172+0000 mon.a (mon.0) 465 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:00:48.922 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:48 vm11 bash[23232]: audit 2026-03-08T23:00:48.645172+0000 mon.a (mon.0) 465 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:00:48.922 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:48 vm11 bash[23232]: audit 2026-03-08T23:00:48.651880+0000 mon.a (mon.0) 466 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:00:48.922 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:48 vm11 bash[23232]: audit 2026-03-08T23:00:48.651880+0000 mon.a (mon.0) 466 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:00:49.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:48 vm06 bash[20625]: cluster 2026-03-08T23:00:47.474508+0000 mgr.y (mgr.14150) 153 : cluster [DBG] pgmap v126: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-08T23:00:49.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:48 vm06 bash[20625]: cluster 2026-03-08T23:00:47.474508+0000 mgr.y (mgr.14150) 153 : cluster [DBG] pgmap v126: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-08T23:00:49.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:48 vm06 bash[20625]: cephadm 2026-03-08T23:00:47.536562+0000 mgr.y (mgr.14150) 154 : cephadm [INF] Deploying daemon osd.4 on vm11 2026-03-08T23:00:49.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:48 vm06 bash[20625]: cephadm 2026-03-08T23:00:47.536562+0000 mgr.y (mgr.14150) 154 : cephadm [INF] Deploying daemon osd.4 on vm11 2026-03-08T23:00:49.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:48 vm06 bash[20625]: audit 
2026-03-08T23:00:48.640054+0000 mon.a (mon.0) 464 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:00:49.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:48 vm06 bash[20625]: audit 2026-03-08T23:00:48.640054+0000 mon.a (mon.0) 464 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:00:49.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:48 vm06 bash[20625]: audit 2026-03-08T23:00:48.645172+0000 mon.a (mon.0) 465 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:00:49.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:48 vm06 bash[20625]: audit 2026-03-08T23:00:48.645172+0000 mon.a (mon.0) 465 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:00:49.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:48 vm06 bash[20625]: audit 2026-03-08T23:00:48.651880+0000 mon.a (mon.0) 466 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:00:49.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:48 vm06 bash[20625]: audit 2026-03-08T23:00:48.651880+0000 mon.a (mon.0) 466 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:00:49.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:48 vm06 bash[27746]: cluster 2026-03-08T23:00:47.474508+0000 mgr.y (mgr.14150) 153 : cluster [DBG] pgmap v126: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-08T23:00:49.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:48 vm06 bash[27746]: cluster 2026-03-08T23:00:47.474508+0000 mgr.y (mgr.14150) 153 : cluster [DBG] pgmap v126: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-08T23:00:49.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:48 vm06 
bash[27746]: cephadm 2026-03-08T23:00:47.536562+0000 mgr.y (mgr.14150) 154 : cephadm [INF] Deploying daemon osd.4 on vm11 2026-03-08T23:00:49.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:48 vm06 bash[27746]: cephadm 2026-03-08T23:00:47.536562+0000 mgr.y (mgr.14150) 154 : cephadm [INF] Deploying daemon osd.4 on vm11 2026-03-08T23:00:49.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:48 vm06 bash[27746]: audit 2026-03-08T23:00:48.640054+0000 mon.a (mon.0) 464 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:00:49.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:48 vm06 bash[27746]: audit 2026-03-08T23:00:48.640054+0000 mon.a (mon.0) 464 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:00:49.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:48 vm06 bash[27746]: audit 2026-03-08T23:00:48.645172+0000 mon.a (mon.0) 465 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:00:49.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:48 vm06 bash[27746]: audit 2026-03-08T23:00:48.645172+0000 mon.a (mon.0) 465 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:00:49.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:48 vm06 bash[27746]: audit 2026-03-08T23:00:48.651880+0000 mon.a (mon.0) 466 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:00:49.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:48 vm06 bash[27746]: audit 2026-03-08T23:00:48.651880+0000 mon.a (mon.0) 466 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:00:51.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:50 vm06 bash[20625]: cluster 2026-03-08T23:00:49.475254+0000 mgr.y (mgr.14150) 155 : cluster [DBG] pgmap 
v127: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-08T23:00:51.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:50 vm06 bash[20625]: cluster 2026-03-08T23:00:49.475254+0000 mgr.y (mgr.14150) 155 : cluster [DBG] pgmap v127: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-08T23:00:51.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:50 vm06 bash[27746]: cluster 2026-03-08T23:00:49.475254+0000 mgr.y (mgr.14150) 155 : cluster [DBG] pgmap v127: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-08T23:00:51.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:50 vm06 bash[27746]: cluster 2026-03-08T23:00:49.475254+0000 mgr.y (mgr.14150) 155 : cluster [DBG] pgmap v127: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-08T23:00:51.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:50 vm11 bash[23232]: cluster 2026-03-08T23:00:49.475254+0000 mgr.y (mgr.14150) 155 : cluster [DBG] pgmap v127: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-08T23:00:51.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:50 vm11 bash[23232]: cluster 2026-03-08T23:00:49.475254+0000 mgr.y (mgr.14150) 155 : cluster [DBG] pgmap v127: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-08T23:00:53.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:52 vm06 bash[20625]: cluster 2026-03-08T23:00:51.475571+0000 mgr.y (mgr.14150) 156 : cluster [DBG] pgmap v128: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-08T23:00:53.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:52 vm06 bash[20625]: cluster 2026-03-08T23:00:51.475571+0000 mgr.y (mgr.14150) 156 : cluster [DBG] pgmap v128: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-08T23:00:53.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:52 vm06 bash[27746]: 
cluster 2026-03-08T23:00:51.475571+0000 mgr.y (mgr.14150) 156 : cluster [DBG] pgmap v128: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-08T23:00:53.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:52 vm06 bash[27746]: cluster 2026-03-08T23:00:51.475571+0000 mgr.y (mgr.14150) 156 : cluster [DBG] pgmap v128: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-08T23:00:53.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:52 vm11 bash[23232]: cluster 2026-03-08T23:00:51.475571+0000 mgr.y (mgr.14150) 156 : cluster [DBG] pgmap v128: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-08T23:00:53.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:52 vm11 bash[23232]: cluster 2026-03-08T23:00:51.475571+0000 mgr.y (mgr.14150) 156 : cluster [DBG] pgmap v128: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-08T23:00:54.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:54 vm06 bash[20625]: audit 2026-03-08T23:00:52.989471+0000 mon.b (mon.1) 12 : audit [INF] from='osd.4 v2:192.168.123.111:6800/1718317342' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-08T23:00:54.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:54 vm06 bash[20625]: audit 2026-03-08T23:00:52.989471+0000 mon.b (mon.1) 12 : audit [INF] from='osd.4 v2:192.168.123.111:6800/1718317342' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-08T23:00:54.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:54 vm06 bash[20625]: audit 2026-03-08T23:00:52.991606+0000 mon.a (mon.0) 467 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-08T23:00:54.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:54 vm06 bash[20625]: audit 2026-03-08T23:00:52.991606+0000 mon.a 
(mon.0) 467 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-08T23:00:54.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:54 vm06 bash[27746]: audit 2026-03-08T23:00:52.989471+0000 mon.b (mon.1) 12 : audit [INF] from='osd.4 v2:192.168.123.111:6800/1718317342' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-08T23:00:54.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:54 vm06 bash[27746]: audit 2026-03-08T23:00:52.989471+0000 mon.b (mon.1) 12 : audit [INF] from='osd.4 v2:192.168.123.111:6800/1718317342' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-08T23:00:54.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:54 vm06 bash[27746]: audit 2026-03-08T23:00:52.991606+0000 mon.a (mon.0) 467 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-08T23:00:54.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:54 vm06 bash[27746]: audit 2026-03-08T23:00:52.991606+0000 mon.a (mon.0) 467 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-08T23:00:54.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:54 vm11 bash[23232]: audit 2026-03-08T23:00:52.989471+0000 mon.b (mon.1) 12 : audit [INF] from='osd.4 v2:192.168.123.111:6800/1718317342' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-08T23:00:54.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:54 vm11 bash[23232]: audit 2026-03-08T23:00:52.989471+0000 mon.b (mon.1) 12 : audit [INF] from='osd.4 v2:192.168.123.111:6800/1718317342' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-08T23:00:54.307 
INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:54 vm11 bash[23232]: audit 2026-03-08T23:00:52.991606+0000 mon.a (mon.0) 467 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-08T23:00:54.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:54 vm11 bash[23232]: audit 2026-03-08T23:00:52.991606+0000 mon.a (mon.0) 467 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]: dispatch 2026-03-08T23:00:55.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:55 vm11 bash[23232]: cluster 2026-03-08T23:00:53.476095+0000 mgr.y (mgr.14150) 157 : cluster [DBG] pgmap v129: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-08T23:00:55.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:55 vm11 bash[23232]: cluster 2026-03-08T23:00:53.476095+0000 mgr.y (mgr.14150) 157 : cluster [DBG] pgmap v129: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-08T23:00:55.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:55 vm11 bash[23232]: audit 2026-03-08T23:00:54.010582+0000 mon.a (mon.0) 468 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished 2026-03-08T23:00:55.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:55 vm11 bash[23232]: audit 2026-03-08T23:00:54.010582+0000 mon.a (mon.0) 468 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished 2026-03-08T23:00:55.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:55 vm11 bash[23232]: audit 2026-03-08T23:00:54.013839+0000 mon.b (mon.1) 13 : audit [INF] from='osd.4 v2:192.168.123.111:6800/1718317342' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm11", "root=default"]}]: dispatch 
2026-03-08T23:00:55.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:55 vm11 bash[23232]: audit 2026-03-08T23:00:54.013839+0000 mon.b (mon.1) 13 : audit [INF] from='osd.4 v2:192.168.123.111:6800/1718317342' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm11", "root=default"]}]: dispatch 2026-03-08T23:00:55.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:55 vm11 bash[23232]: cluster 2026-03-08T23:00:54.014184+0000 mon.a (mon.0) 469 : cluster [DBG] osdmap e29: 5 total, 4 up, 5 in 2026-03-08T23:00:55.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:55 vm11 bash[23232]: cluster 2026-03-08T23:00:54.014184+0000 mon.a (mon.0) 469 : cluster [DBG] osdmap e29: 5 total, 4 up, 5 in 2026-03-08T23:00:55.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:55 vm11 bash[23232]: audit 2026-03-08T23:00:54.015269+0000 mon.a (mon.0) 470 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-08T23:00:55.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:55 vm11 bash[23232]: audit 2026-03-08T23:00:54.015269+0000 mon.a (mon.0) 470 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-08T23:00:55.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:55 vm11 bash[23232]: audit 2026-03-08T23:00:54.015437+0000 mon.a (mon.0) 471 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm11", "root=default"]}]: dispatch 2026-03-08T23:00:55.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:55 vm11 bash[23232]: audit 2026-03-08T23:00:54.015437+0000 mon.a (mon.0) 471 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm11", "root=default"]}]: dispatch 2026-03-08T23:00:55.307 
INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:55 vm11 bash[23232]: audit 2026-03-08T23:00:54.865345+0000 mon.a (mon.0) 472 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:00:55.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:55 vm11 bash[23232]: audit 2026-03-08T23:00:54.865345+0000 mon.a (mon.0) 472 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:00:55.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:55 vm11 bash[23232]: audit 2026-03-08T23:00:54.883191+0000 mon.a (mon.0) 473 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:00:55.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:55 vm11 bash[23232]: audit 2026-03-08T23:00:54.883191+0000 mon.a (mon.0) 473 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:00:55.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:55 vm06 bash[20625]: cluster 2026-03-08T23:00:53.476095+0000 mgr.y (mgr.14150) 157 : cluster [DBG] pgmap v129: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-08T23:00:55.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:55 vm06 bash[20625]: cluster 2026-03-08T23:00:53.476095+0000 mgr.y (mgr.14150) 157 : cluster [DBG] pgmap v129: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail 2026-03-08T23:00:55.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:55 vm06 bash[20625]: audit 2026-03-08T23:00:54.010582+0000 mon.a (mon.0) 468 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished 2026-03-08T23:00:55.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:55 vm06 bash[20625]: audit 2026-03-08T23:00:54.010582+0000 mon.a (mon.0) 468 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished 
2026-03-08T23:00:55.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:55 vm06 bash[20625]: audit 2026-03-08T23:00:54.013839+0000 mon.b (mon.1) 13 : audit [INF] from='osd.4 v2:192.168.123.111:6800/1718317342' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm11", "root=default"]}]: dispatch 2026-03-08T23:00:55.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:55 vm06 bash[20625]: audit 2026-03-08T23:00:54.013839+0000 mon.b (mon.1) 13 : audit [INF] from='osd.4 v2:192.168.123.111:6800/1718317342' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm11", "root=default"]}]: dispatch 2026-03-08T23:00:55.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:55 vm06 bash[20625]: cluster 2026-03-08T23:00:54.014184+0000 mon.a (mon.0) 469 : cluster [DBG] osdmap e29: 5 total, 4 up, 5 in 2026-03-08T23:00:55.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:55 vm06 bash[20625]: cluster 2026-03-08T23:00:54.014184+0000 mon.a (mon.0) 469 : cluster [DBG] osdmap e29: 5 total, 4 up, 5 in 2026-03-08T23:00:55.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:55 vm06 bash[20625]: audit 2026-03-08T23:00:54.015269+0000 mon.a (mon.0) 470 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-08T23:00:55.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:55 vm06 bash[20625]: audit 2026-03-08T23:00:54.015269+0000 mon.a (mon.0) 470 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-08T23:00:55.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:55 vm06 bash[20625]: audit 2026-03-08T23:00:54.015437+0000 mon.a (mon.0) 471 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm11", "root=default"]}]: dispatch 
2026-03-08T23:00:55.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:55 vm06 bash[20625]: audit 2026-03-08T23:00:54.015437+0000 mon.a (mon.0) 471 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm11", "root=default"]}]: dispatch
2026-03-08T23:00:55.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:55 vm06 bash[20625]: audit 2026-03-08T23:00:54.865345+0000 mon.a (mon.0) 472 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:00:55.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:55 vm06 bash[20625]: audit 2026-03-08T23:00:54.883191+0000 mon.a (mon.0) 473 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:00:55.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:55 vm06 bash[27746]: cluster 2026-03-08T23:00:53.476095+0000 mgr.y (mgr.14150) 157 : cluster [DBG] pgmap v129: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-08T23:00:55.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:55 vm06 bash[27746]: audit 2026-03-08T23:00:54.010582+0000 mon.a (mon.0) 468 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["4"]}]': finished
2026-03-08T23:00:55.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:55 vm06 bash[27746]: audit 2026-03-08T23:00:54.013839+0000 mon.b (mon.1) 13 : audit [INF] from='osd.4 v2:192.168.123.111:6800/1718317342' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm11", "root=default"]}]: dispatch
2026-03-08T23:00:55.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:55 vm06 bash[27746]: cluster 2026-03-08T23:00:54.014184+0000 mon.a (mon.0) 469 : cluster [DBG] osdmap e29: 5 total, 4 up, 5 in
2026-03-08T23:00:55.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:55 vm06 bash[27746]: audit 2026-03-08T23:00:54.015269+0000 mon.a (mon.0) 470 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-08T23:00:55.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:55 vm06 bash[27746]: audit 2026-03-08T23:00:54.015437+0000 mon.a (mon.0) 471 : audit [INF] from='osd.4 ' entity='osd.4' cmd=[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm11", "root=default"]}]: dispatch
2026-03-08T23:00:55.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:55 vm06 bash[27746]: audit 2026-03-08T23:00:54.865345+0000 mon.a (mon.0) 472 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:00:55.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:55 vm06 bash[27746]: audit 2026-03-08T23:00:54.883191+0000 mon.a (mon.0) 473 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:00:56.099 INFO:teuthology.orchestra.run.vm11.stdout:Created osd(s) 4 on host 'vm11'
2026-03-08T23:00:56.203 DEBUG:teuthology.orchestra.run.vm11:osd.4> sudo journalctl -f -n 0 -u ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@osd.4.service
2026-03-08T23:00:56.204 INFO:tasks.cephadm:Deploying osd.5 on vm11 with /dev/vdd...
2026-03-08T23:00:56.205 DEBUG:teuthology.orchestra.run.vm11:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e2eb96e6-1b41-11f1-83e5-75f1b5373d30 -- lvm zap /dev/vdd
2026-03-08T23:00:56.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:56 vm11 bash[23232]: cluster 2026-03-08T23:00:54.034285+0000 osd.4 (osd.4) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-08T23:00:56.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:56 vm11 bash[23232]: cluster 2026-03-08T23:00:54.034337+0000 osd.4 (osd.4) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-08T23:00:56.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:56 vm11 bash[23232]: audit 2026-03-08T23:00:55.022861+0000 mon.a (mon.0) 474 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm11", "root=default"]}]': finished
2026-03-08T23:00:56.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:56 vm11 bash[23232]: cluster 2026-03-08T23:00:55.029631+0000 mon.a (mon.0) 475 : cluster [DBG] osdmap e30: 5 total, 4 up, 5 in
2026-03-08T23:00:56.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:56 vm11 bash[23232]: audit 2026-03-08T23:00:55.032392+0000 mon.a (mon.0) 476 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-08T23:00:56.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:56 vm11 bash[23232]: audit 2026-03-08T23:00:55.042793+0000 mon.a (mon.0) 477 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-08T23:00:56.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:56 vm11 bash[23232]: audit 2026-03-08T23:00:55.302673+0000 mon.a (mon.0) 478 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:00:56.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:56 vm11 bash[23232]: audit 2026-03-08T23:00:55.303199+0000 mon.a (mon.0) 479 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T23:00:56.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:56 vm11 bash[23232]: audit 2026-03-08T23:00:55.332665+0000 mon.a (mon.0) 480 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:00:56.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:56 vm11 bash[23232]: cluster 2026-03-08T23:00:55.958445+0000 mon.a (mon.0) 481 : cluster [DBG] osdmap e31: 5 total, 4 up, 5 in
2026-03-08T23:00:56.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:56 vm11 bash[23232]: audit 2026-03-08T23:00:55.958570+0000 mon.a (mon.0) 482 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-08T23:00:56.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:56 vm11 bash[23232]: audit 2026-03-08T23:00:56.042609+0000 mon.a (mon.0) 483 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-08T23:00:56.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:56 vm06 bash[20625]: cluster 2026-03-08T23:00:54.034285+0000 osd.4 (osd.4) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-08T23:00:56.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:56 vm06 bash[20625]: cluster 2026-03-08T23:00:54.034337+0000 osd.4 (osd.4) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-08T23:00:56.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:56 vm06 bash[20625]: audit 2026-03-08T23:00:55.022861+0000 mon.a (mon.0) 474 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm11", "root=default"]}]': finished
2026-03-08T23:00:56.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:56 vm06 bash[20625]: cluster 2026-03-08T23:00:55.029631+0000 mon.a (mon.0) 475 : cluster [DBG] osdmap e30: 5 total, 4 up, 5 in
2026-03-08T23:00:56.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:56 vm06 bash[20625]: audit 2026-03-08T23:00:55.032392+0000 mon.a (mon.0) 476 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-08T23:00:56.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:56 vm06 bash[20625]: audit 2026-03-08T23:00:55.042793+0000 mon.a (mon.0) 477 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-08T23:00:56.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:56 vm06 bash[20625]: audit 2026-03-08T23:00:55.302673+0000 mon.a (mon.0) 478 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:00:56.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:56 vm06 bash[20625]: audit 2026-03-08T23:00:55.303199+0000 mon.a (mon.0) 479 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T23:00:56.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:56 vm06 bash[20625]: audit 2026-03-08T23:00:55.332665+0000 mon.a (mon.0) 480 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:00:56.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:56 vm06 bash[20625]: cluster 2026-03-08T23:00:55.958445+0000 mon.a (mon.0) 481 : cluster [DBG] osdmap e31: 5 total, 4 up, 5 in
2026-03-08T23:00:56.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:56 vm06 bash[20625]: audit 2026-03-08T23:00:55.958570+0000 mon.a (mon.0) 482 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-08T23:00:56.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:56 vm06 bash[20625]: audit 2026-03-08T23:00:56.042609+0000 mon.a (mon.0) 483 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-08T23:00:56.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:56 vm06 bash[27746]: cluster 2026-03-08T23:00:54.034285+0000 osd.4 (osd.4) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-08T23:00:56.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:56 vm06 bash[27746]: cluster 2026-03-08T23:00:54.034337+0000 osd.4 (osd.4) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-08T23:00:56.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:56 vm06 bash[27746]: audit 2026-03-08T23:00:55.022861+0000 mon.a (mon.0) 474 : audit [INF] from='osd.4 ' entity='osd.4' cmd='[{"prefix": "osd crush create-or-move", "id": 4, "weight":0.0195, "args": ["host=vm11", "root=default"]}]': finished
2026-03-08T23:00:56.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:56 vm06 bash[27746]: cluster 2026-03-08T23:00:55.029631+0000 mon.a (mon.0) 475 : cluster [DBG] osdmap e30: 5 total, 4 up, 5 in
2026-03-08T23:00:56.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:56 vm06 bash[27746]: audit 2026-03-08T23:00:55.032392+0000 mon.a (mon.0) 476 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-08T23:00:56.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:56 vm06 bash[27746]: audit 2026-03-08T23:00:55.042793+0000 mon.a (mon.0) 477 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-08T23:00:56.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:56 vm06 bash[27746]: audit 2026-03-08T23:00:55.302673+0000 mon.a (mon.0) 478 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:00:56.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:56 vm06 bash[27746]: audit 2026-03-08T23:00:55.303199+0000 mon.a (mon.0) 479 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T23:00:56.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:56 vm06 bash[27746]: audit 2026-03-08T23:00:55.332665+0000 mon.a (mon.0) 480 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:00:56.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:56 vm06 bash[27746]: cluster 2026-03-08T23:00:55.958445+0000 mon.a (mon.0) 481 : cluster [DBG] osdmap e31: 5 total, 4 up, 5 in
2026-03-08T23:00:56.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:56 vm06 bash[27746]: audit 2026-03-08T23:00:55.958570+0000 mon.a (mon.0) 482 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-08T23:00:56.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:56 vm06 bash[27746]: audit 2026-03-08T23:00:56.042609+0000 mon.a (mon.0) 483 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-08T23:00:57.057 INFO:journalctl@ceph.osd.4.vm11.stdout:Mar 08 23:00:56 vm11 bash[26565]: debug 2026-03-08T23:00:56.655+0000 7ffb6a821640 -1 osd.4 0 waiting for initial osdmap
2026-03-08T23:00:57.057 INFO:journalctl@ceph.osd.4.vm11.stdout:Mar 08 23:00:56 vm11 bash[26565]: debug 2026-03-08T23:00:56.663+0000 7ffb65637640 -1 osd.4 31 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
2026-03-08T23:00:57.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:57 vm06 bash[20625]: cluster 2026-03-08T23:00:55.476485+0000 mgr.y (mgr.14150) 158 : cluster [DBG] pgmap v132: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-08T23:00:57.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:57 vm06 bash[20625]: audit 2026-03-08T23:00:56.076173+0000 mon.a (mon.0) 484 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T23:00:57.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:57 vm06 bash[20625]: audit 2026-03-08T23:00:56.085796+0000 mon.a (mon.0) 485 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:00:57.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:57 vm06 bash[20625]: audit 2026-03-08T23:00:56.095983+0000 mon.a (mon.0) 486 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:00:57.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:57 vm06 bash[20625]: audit 2026-03-08T23:00:56.662123+0000 mon.a (mon.0) 487 : audit [INF] from='osd.4 ' entity='osd.4'
2026-03-08T23:00:57.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:57 vm06 bash[27746]: cluster 2026-03-08T23:00:55.476485+0000 mgr.y (mgr.14150) 158 : cluster [DBG] pgmap v132: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-08T23:00:57.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:57 vm06 bash[27746]: audit 2026-03-08T23:00:56.076173+0000 mon.a (mon.0) 484 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T23:00:57.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:57 vm06 bash[27746]: audit 2026-03-08T23:00:56.085796+0000 mon.a (mon.0) 485 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:00:57.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:57 vm06 bash[27746]: audit 2026-03-08T23:00:56.095983+0000 mon.a (mon.0) 486 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:00:57.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:57 vm06 bash[27746]: audit 2026-03-08T23:00:56.662123+0000 mon.a (mon.0) 487 : audit [INF] from='osd.4 ' entity='osd.4'
2026-03-08T23:00:57.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:57 vm11 bash[23232]: cluster 2026-03-08T23:00:55.476485+0000 mgr.y (mgr.14150) 158 : cluster [DBG] pgmap v132: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-08T23:00:57.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:57 vm11 bash[23232]: audit 2026-03-08T23:00:56.076173+0000 mon.a (mon.0) 484 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T23:00:57.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:57 vm11 bash[23232]: audit 2026-03-08T23:00:56.085796+0000 mon.a (mon.0) 485 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:00:57.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:57 vm11 bash[23232]: audit 2026-03-08T23:00:56.095983+0000 mon.a (mon.0) 486 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:00:57.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:57 vm11 bash[23232]: audit 2026-03-08T23:00:56.662123+0000 mon.a (mon.0) 487 : audit [INF] from='osd.4 ' entity='osd.4'
2026-03-08T23:00:58.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:58 vm06 bash[20625]: audit 2026-03-08T23:00:57.134213+0000 mon.a (mon.0) 488 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-08T23:00:58.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:58 vm06 bash[20625]: cluster 2026-03-08T23:00:57.194062+0000 mon.a (mon.0) 489 : cluster [INF] osd.4 v2:192.168.123.111:6800/1718317342 boot
2026-03-08T23:00:58.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:58 vm06 bash[20625]: cluster 2026-03-08T23:00:57.194187+0000 mon.a (mon.0) 490 : cluster [DBG] osdmap e32: 5 total, 5 up, 5 in
2026-03-08T23:00:58.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:58 vm06 bash[20625]: audit 2026-03-08T23:00:57.195099+0000 mon.a (mon.0) 491 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-08T23:00:58.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:58 vm06 bash[27746]: audit 2026-03-08T23:00:57.134213+0000 mon.a (mon.0) 488 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-08T23:00:58.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:58 vm06 bash[27746]: cluster 2026-03-08T23:00:57.194062+0000 mon.a (mon.0) 489 : cluster [INF] osd.4 v2:192.168.123.111:6800/1718317342 boot
2026-03-08T23:00:58.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:58 vm06 bash[27746]: cluster 2026-03-08T23:00:57.194187+0000 mon.a (mon.0) 490 : cluster [DBG] osdmap e32: 5 total, 5 up, 5 in
2026-03-08T23:00:58.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:58 vm06 bash[27746]: audit 2026-03-08T23:00:57.195099+0000 mon.a (mon.0) 491 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-08T23:00:58.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:58 vm11 bash[23232]: audit 2026-03-08T23:00:57.134213+0000 mon.a (mon.0) 488 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-08T23:00:58.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:58 vm11 bash[23232]: cluster 2026-03-08T23:00:57.194062+0000 mon.a (mon.0) 489 : cluster [INF] osd.4 v2:192.168.123.111:6800/1718317342 boot
2026-03-08T23:00:58.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:58 vm11 bash[23232]: cluster 2026-03-08T23:00:57.194187+0000 mon.a (mon.0) 490 : cluster [DBG] osdmap e32: 5 total, 5 up, 5 in
2026-03-08T23:00:58.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:58 vm11 bash[23232]: audit 2026-03-08T23:00:57.195099+0000 mon.a (mon.0) 491 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-08T23:00:59.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:59 vm06 bash[20625]: cluster 2026-03-08T23:00:57.476717+0000 mgr.y (mgr.14150) 159 : cluster [DBG] pgmap v135: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-08T23:00:59.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:59 vm06 bash[20625]: cluster 2026-03-08T23:00:58.203850+0000 mon.a (mon.0) 492 : cluster [DBG] osdmap e33: 5 total, 5 up, 5 in
2026-03-08T23:00:59.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:00:59 vm06 bash[20625]: cluster 2026-03-08T23:00:59.213381+0000 mon.a (mon.0) 493 : cluster [DBG] osdmap e34: 5 total, 5 up, 5 in
2026-03-08T23:00:59.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:59 vm06 bash[27746]: cluster 2026-03-08T23:00:57.476717+0000 mgr.y (mgr.14150) 159 : cluster [DBG] pgmap v135: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-08T23:00:59.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:59 vm06 bash[27746]: cluster 2026-03-08T23:00:58.203850+0000 mon.a (mon.0) 492 : cluster [DBG] osdmap e33: 5 total, 5 up, 5 in
2026-03-08T23:00:59.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:00:59 vm06 bash[27746]: cluster 2026-03-08T23:00:59.213381+0000 mon.a (mon.0) 493 : cluster [DBG] osdmap e34: 5 total, 5 up, 5 in
2026-03-08T23:00:59.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:59 vm11 bash[23232]: cluster 2026-03-08T23:00:57.476717+0000 mgr.y (mgr.14150) 159 : cluster [DBG] pgmap v135: 1 pgs: 1 active+clean; 449 KiB data, 107 MiB used, 80 GiB / 80 GiB avail
2026-03-08T23:00:59.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:59 vm11 bash[23232]: cluster 2026-03-08T23:00:58.203850+0000 mon.a (mon.0) 492 : cluster [DBG] osdmap e33: 5 total, 5 up, 5 in
2026-03-08T23:00:59.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:59 vm11 bash[23232]: cluster 2026-03-08T23:00:59.213381+0000 mon.a
(mon.0) 493 : cluster [DBG] osdmap e34: 5 total, 5 up, 5 in 2026-03-08T23:00:59.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:00:59 vm11 bash[23232]: cluster 2026-03-08T23:00:59.213381+0000 mon.a (mon.0) 493 : cluster [DBG] osdmap e34: 5 total, 5 up, 5 in 2026-03-08T23:01:00.903 INFO:teuthology.orchestra.run.vm11.stderr:Inferring config /var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/mon.b/config 2026-03-08T23:01:01.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:01 vm06 bash[20625]: cluster 2026-03-08T23:00:59.476956+0000 mgr.y (mgr.14150) 160 : cluster [DBG] pgmap v138: 1 pgs: 1 remapped+peering; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-08T23:01:01.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:01 vm06 bash[20625]: cluster 2026-03-08T23:00:59.476956+0000 mgr.y (mgr.14150) 160 : cluster [DBG] pgmap v138: 1 pgs: 1 remapped+peering; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-08T23:01:01.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:01 vm06 bash[27746]: cluster 2026-03-08T23:00:59.476956+0000 mgr.y (mgr.14150) 160 : cluster [DBG] pgmap v138: 1 pgs: 1 remapped+peering; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-08T23:01:01.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:01 vm06 bash[27746]: cluster 2026-03-08T23:00:59.476956+0000 mgr.y (mgr.14150) 160 : cluster [DBG] pgmap v138: 1 pgs: 1 remapped+peering; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-08T23:01:01.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:01 vm11 bash[23232]: cluster 2026-03-08T23:00:59.476956+0000 mgr.y (mgr.14150) 160 : cluster [DBG] pgmap v138: 1 pgs: 1 remapped+peering; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-08T23:01:01.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:01 vm11 bash[23232]: cluster 2026-03-08T23:00:59.476956+0000 mgr.y (mgr.14150) 160 : cluster [DBG] pgmap v138: 1 pgs: 1 remapped+peering; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB 
avail 2026-03-08T23:01:02.526 INFO:teuthology.orchestra.run.vm11.stdout: 2026-03-08T23:01:02.548 DEBUG:teuthology.orchestra.run.vm11:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e2eb96e6-1b41-11f1-83e5-75f1b5373d30 -- ceph orch daemon add osd vm11:/dev/vdd 2026-03-08T23:01:02.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:02 vm06 bash[20625]: cluster 2026-03-08T23:01:01.477195+0000 mgr.y (mgr.14150) 161 : cluster [DBG] pgmap v139: 1 pgs: 1 remapped+peering; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-08T23:01:02.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:02 vm06 bash[20625]: cluster 2026-03-08T23:01:01.477195+0000 mgr.y (mgr.14150) 161 : cluster [DBG] pgmap v139: 1 pgs: 1 remapped+peering; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-08T23:01:02.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:02 vm06 bash[27746]: cluster 2026-03-08T23:01:01.477195+0000 mgr.y (mgr.14150) 161 : cluster [DBG] pgmap v139: 1 pgs: 1 remapped+peering; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-08T23:01:02.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:02 vm06 bash[27746]: cluster 2026-03-08T23:01:01.477195+0000 mgr.y (mgr.14150) 161 : cluster [DBG] pgmap v139: 1 pgs: 1 remapped+peering; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-08T23:01:02.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:02 vm11 bash[23232]: cluster 2026-03-08T23:01:01.477195+0000 mgr.y (mgr.14150) 161 : cluster [DBG] pgmap v139: 1 pgs: 1 remapped+peering; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-08T23:01:02.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:02 vm11 bash[23232]: cluster 2026-03-08T23:01:01.477195+0000 mgr.y (mgr.14150) 161 : cluster [DBG] pgmap v139: 1 pgs: 1 remapped+peering; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 
2026-03-08T23:01:05.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:04 vm06 bash[20625]: cluster 2026-03-08T23:01:03.477466+0000 mgr.y (mgr.14150) 162 : cluster [DBG] pgmap v140: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 71 KiB/s, 0 objects/s recovering 2026-03-08T23:01:05.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:04 vm06 bash[20625]: cluster 2026-03-08T23:01:03.477466+0000 mgr.y (mgr.14150) 162 : cluster [DBG] pgmap v140: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 71 KiB/s, 0 objects/s recovering 2026-03-08T23:01:05.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:04 vm06 bash[20625]: cephadm 2026-03-08T23:01:03.681622+0000 mgr.y (mgr.14150) 163 : cephadm [INF] Detected new or changed devices on vm11 2026-03-08T23:01:05.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:04 vm06 bash[20625]: cephadm 2026-03-08T23:01:03.681622+0000 mgr.y (mgr.14150) 163 : cephadm [INF] Detected new or changed devices on vm11 2026-03-08T23:01:05.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:04 vm06 bash[20625]: audit 2026-03-08T23:01:03.687123+0000 mon.a (mon.0) 494 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:01:05.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:04 vm06 bash[20625]: audit 2026-03-08T23:01:03.687123+0000 mon.a (mon.0) 494 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:01:05.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:04 vm06 bash[20625]: audit 2026-03-08T23:01:03.691429+0000 mon.a (mon.0) 495 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:01:05.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:04 vm06 bash[20625]: audit 2026-03-08T23:01:03.691429+0000 mon.a (mon.0) 495 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:01:05.030 
INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:04 vm06 bash[20625]: audit 2026-03-08T23:01:03.692882+0000 mon.a (mon.0) 496 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-08T23:01:05.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:04 vm06 bash[20625]: audit 2026-03-08T23:01:03.692882+0000 mon.a (mon.0) 496 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-08T23:01:05.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:04 vm06 bash[20625]: cephadm 2026-03-08T23:01:03.693408+0000 mgr.y (mgr.14150) 164 : cephadm [INF] Adjusting osd_memory_target on vm11 to 455.7M 2026-03-08T23:01:05.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:04 vm06 bash[20625]: cephadm 2026-03-08T23:01:03.693408+0000 mgr.y (mgr.14150) 164 : cephadm [INF] Adjusting osd_memory_target on vm11 to 455.7M 2026-03-08T23:01:05.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:04 vm06 bash[20625]: cephadm 2026-03-08T23:01:03.693835+0000 mgr.y (mgr.14150) 165 : cephadm [WRN] Unable to set osd_memory_target on vm11 to 477921689: error parsing value: Value '477921689' is below minimum 939524096 2026-03-08T23:01:05.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:04 vm06 bash[20625]: cephadm 2026-03-08T23:01:03.693835+0000 mgr.y (mgr.14150) 165 : cephadm [WRN] Unable to set osd_memory_target on vm11 to 477921689: error parsing value: Value '477921689' is below minimum 939524096 2026-03-08T23:01:05.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:04 vm06 bash[20625]: audit 2026-03-08T23:01:03.694109+0000 mon.a (mon.0) 497 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:01:05.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:04 
vm06 bash[20625]: audit 2026-03-08T23:01:03.694109+0000 mon.a (mon.0) 497 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:01:05.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:04 vm06 bash[20625]: audit 2026-03-08T23:01:03.694493+0000 mon.a (mon.0) 498 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:01:05.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:04 vm06 bash[20625]: audit 2026-03-08T23:01:03.694493+0000 mon.a (mon.0) 498 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:01:05.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:04 vm06 bash[20625]: audit 2026-03-08T23:01:03.697624+0000 mon.a (mon.0) 499 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:01:05.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:04 vm06 bash[20625]: audit 2026-03-08T23:01:03.697624+0000 mon.a (mon.0) 499 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:01:05.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:04 vm06 bash[27746]: cluster 2026-03-08T23:01:03.477466+0000 mgr.y (mgr.14150) 162 : cluster [DBG] pgmap v140: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 71 KiB/s, 0 objects/s recovering 2026-03-08T23:01:05.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:04 vm06 bash[27746]: cluster 2026-03-08T23:01:03.477466+0000 mgr.y (mgr.14150) 162 : cluster [DBG] pgmap v140: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 71 KiB/s, 0 objects/s recovering 2026-03-08T23:01:05.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:04 vm06 bash[27746]: cephadm 2026-03-08T23:01:03.681622+0000 mgr.y (mgr.14150) 
163 : cephadm [INF] Detected new or changed devices on vm11 2026-03-08T23:01:05.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:04 vm06 bash[27746]: cephadm 2026-03-08T23:01:03.681622+0000 mgr.y (mgr.14150) 163 : cephadm [INF] Detected new or changed devices on vm11 2026-03-08T23:01:05.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:04 vm06 bash[27746]: audit 2026-03-08T23:01:03.687123+0000 mon.a (mon.0) 494 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:01:05.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:04 vm06 bash[27746]: audit 2026-03-08T23:01:03.687123+0000 mon.a (mon.0) 494 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:01:05.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:04 vm06 bash[27746]: audit 2026-03-08T23:01:03.691429+0000 mon.a (mon.0) 495 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:01:05.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:04 vm06 bash[27746]: audit 2026-03-08T23:01:03.691429+0000 mon.a (mon.0) 495 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:01:05.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:04 vm06 bash[27746]: audit 2026-03-08T23:01:03.692882+0000 mon.a (mon.0) 496 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-08T23:01:05.031 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:04 vm06 bash[27746]: audit 2026-03-08T23:01:03.692882+0000 mon.a (mon.0) 496 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-08T23:01:05.031 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:04 vm06 bash[27746]: cephadm 2026-03-08T23:01:03.693408+0000 mgr.y (mgr.14150) 164 : cephadm [INF] Adjusting 
osd_memory_target on vm11 to 455.7M 2026-03-08T23:01:05.031 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:04 vm06 bash[27746]: cephadm 2026-03-08T23:01:03.693408+0000 mgr.y (mgr.14150) 164 : cephadm [INF] Adjusting osd_memory_target on vm11 to 455.7M 2026-03-08T23:01:05.031 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:04 vm06 bash[27746]: cephadm 2026-03-08T23:01:03.693835+0000 mgr.y (mgr.14150) 165 : cephadm [WRN] Unable to set osd_memory_target on vm11 to 477921689: error parsing value: Value '477921689' is below minimum 939524096 2026-03-08T23:01:05.031 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:04 vm06 bash[27746]: cephadm 2026-03-08T23:01:03.693835+0000 mgr.y (mgr.14150) 165 : cephadm [WRN] Unable to set osd_memory_target on vm11 to 477921689: error parsing value: Value '477921689' is below minimum 939524096 2026-03-08T23:01:05.031 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:04 vm06 bash[27746]: audit 2026-03-08T23:01:03.694109+0000 mon.a (mon.0) 497 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:01:05.031 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:04 vm06 bash[27746]: audit 2026-03-08T23:01:03.694109+0000 mon.a (mon.0) 497 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:01:05.031 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:04 vm06 bash[27746]: audit 2026-03-08T23:01:03.694493+0000 mon.a (mon.0) 498 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:01:05.031 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:04 vm06 bash[27746]: audit 2026-03-08T23:01:03.694493+0000 mon.a (mon.0) 498 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: 
dispatch 2026-03-08T23:01:05.031 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:04 vm06 bash[27746]: audit 2026-03-08T23:01:03.697624+0000 mon.a (mon.0) 499 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:01:05.031 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:04 vm06 bash[27746]: audit 2026-03-08T23:01:03.697624+0000 mon.a (mon.0) 499 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:01:05.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:04 vm11 bash[23232]: cluster 2026-03-08T23:01:03.477466+0000 mgr.y (mgr.14150) 162 : cluster [DBG] pgmap v140: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 71 KiB/s, 0 objects/s recovering 2026-03-08T23:01:05.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:04 vm11 bash[23232]: cluster 2026-03-08T23:01:03.477466+0000 mgr.y (mgr.14150) 162 : cluster [DBG] pgmap v140: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 71 KiB/s, 0 objects/s recovering 2026-03-08T23:01:05.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:04 vm11 bash[23232]: cephadm 2026-03-08T23:01:03.681622+0000 mgr.y (mgr.14150) 163 : cephadm [INF] Detected new or changed devices on vm11 2026-03-08T23:01:05.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:04 vm11 bash[23232]: cephadm 2026-03-08T23:01:03.681622+0000 mgr.y (mgr.14150) 163 : cephadm [INF] Detected new or changed devices on vm11 2026-03-08T23:01:05.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:04 vm11 bash[23232]: audit 2026-03-08T23:01:03.687123+0000 mon.a (mon.0) 494 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:01:05.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:04 vm11 bash[23232]: audit 2026-03-08T23:01:03.687123+0000 mon.a (mon.0) 494 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:01:05.057 
INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:04 vm11 bash[23232]: audit 2026-03-08T23:01:03.691429+0000 mon.a (mon.0) 495 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:01:05.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:04 vm11 bash[23232]: audit 2026-03-08T23:01:03.691429+0000 mon.a (mon.0) 495 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:01:05.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:04 vm11 bash[23232]: audit 2026-03-08T23:01:03.692882+0000 mon.a (mon.0) 496 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-08T23:01:05.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:04 vm11 bash[23232]: audit 2026-03-08T23:01:03.692882+0000 mon.a (mon.0) 496 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-08T23:01:05.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:04 vm11 bash[23232]: cephadm 2026-03-08T23:01:03.693408+0000 mgr.y (mgr.14150) 164 : cephadm [INF] Adjusting osd_memory_target on vm11 to 455.7M 2026-03-08T23:01:05.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:04 vm11 bash[23232]: cephadm 2026-03-08T23:01:03.693408+0000 mgr.y (mgr.14150) 164 : cephadm [INF] Adjusting osd_memory_target on vm11 to 455.7M 2026-03-08T23:01:05.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:04 vm11 bash[23232]: cephadm 2026-03-08T23:01:03.693835+0000 mgr.y (mgr.14150) 165 : cephadm [WRN] Unable to set osd_memory_target on vm11 to 477921689: error parsing value: Value '477921689' is below minimum 939524096 2026-03-08T23:01:05.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:04 vm11 bash[23232]: cephadm 2026-03-08T23:01:03.693835+0000 mgr.y (mgr.14150) 165 : cephadm [WRN] Unable to set 
osd_memory_target on vm11 to 477921689: error parsing value: Value '477921689' is below minimum 939524096 2026-03-08T23:01:05.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:04 vm11 bash[23232]: audit 2026-03-08T23:01:03.694109+0000 mon.a (mon.0) 497 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:01:05.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:04 vm11 bash[23232]: audit 2026-03-08T23:01:03.694109+0000 mon.a (mon.0) 497 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:01:05.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:04 vm11 bash[23232]: audit 2026-03-08T23:01:03.694493+0000 mon.a (mon.0) 498 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:01:05.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:04 vm11 bash[23232]: audit 2026-03-08T23:01:03.694493+0000 mon.a (mon.0) 498 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:01:05.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:04 vm11 bash[23232]: audit 2026-03-08T23:01:03.697624+0000 mon.a (mon.0) 499 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:01:05.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:04 vm11 bash[23232]: audit 2026-03-08T23:01:03.697624+0000 mon.a (mon.0) 499 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:01:07.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:06 vm06 bash[20625]: cluster 2026-03-08T23:01:05.477743+0000 mgr.y (mgr.14150) 166 : cluster [DBG] pgmap v141: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 56 KiB/s, 0 
objects/s recovering 2026-03-08T23:01:07.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:06 vm06 bash[20625]: cluster 2026-03-08T23:01:05.477743+0000 mgr.y (mgr.14150) 166 : cluster [DBG] pgmap v141: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 56 KiB/s, 0 objects/s recovering 2026-03-08T23:01:07.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:06 vm06 bash[27746]: cluster 2026-03-08T23:01:05.477743+0000 mgr.y (mgr.14150) 166 : cluster [DBG] pgmap v141: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 56 KiB/s, 0 objects/s recovering 2026-03-08T23:01:07.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:06 vm06 bash[27746]: cluster 2026-03-08T23:01:05.477743+0000 mgr.y (mgr.14150) 166 : cluster [DBG] pgmap v141: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 56 KiB/s, 0 objects/s recovering 2026-03-08T23:01:07.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:06 vm11 bash[23232]: cluster 2026-03-08T23:01:05.477743+0000 mgr.y (mgr.14150) 166 : cluster [DBG] pgmap v141: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 56 KiB/s, 0 objects/s recovering 2026-03-08T23:01:07.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:06 vm11 bash[23232]: cluster 2026-03-08T23:01:05.477743+0000 mgr.y (mgr.14150) 166 : cluster [DBG] pgmap v141: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 56 KiB/s, 0 objects/s recovering 2026-03-08T23:01:07.181 INFO:teuthology.orchestra.run.vm11.stderr:Inferring config /var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/mon.b/config 2026-03-08T23:01:08.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:07 vm06 bash[20625]: audit 2026-03-08T23:01:07.439184+0000 mon.a (mon.0) 500 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-08T23:01:08.030 
INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:07 vm06 bash[20625]: audit 2026-03-08T23:01:07.439184+0000 mon.a (mon.0) 500 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-08T23:01:08.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:07 vm06 bash[20625]: audit 2026-03-08T23:01:07.440662+0000 mon.a (mon.0) 501 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-08T23:01:08.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:07 vm06 bash[20625]: audit 2026-03-08T23:01:07.440662+0000 mon.a (mon.0) 501 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-08T23:01:08.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:07 vm06 bash[20625]: audit 2026-03-08T23:01:07.441236+0000 mon.a (mon.0) 502 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:01:08.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:07 vm06 bash[20625]: audit 2026-03-08T23:01:07.441236+0000 mon.a (mon.0) 502 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:01:08.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:07 vm06 bash[27746]: audit 2026-03-08T23:01:07.439184+0000 mon.a (mon.0) 500 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-08T23:01:08.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:07 vm06 bash[27746]: audit 2026-03-08T23:01:07.439184+0000 mon.a (mon.0) 500 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 
cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-08T23:01:08.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:07 vm06 bash[27746]: audit 2026-03-08T23:01:07.440662+0000 mon.a (mon.0) 501 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-08T23:01:08.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:07 vm06 bash[27746]: audit 2026-03-08T23:01:07.440662+0000 mon.a (mon.0) 501 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-08T23:01:08.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:07 vm06 bash[27746]: audit 2026-03-08T23:01:07.441236+0000 mon.a (mon.0) 502 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:01:08.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:07 vm06 bash[27746]: audit 2026-03-08T23:01:07.441236+0000 mon.a (mon.0) 502 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:01:08.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:07 vm11 bash[23232]: audit 2026-03-08T23:01:07.439184+0000 mon.a (mon.0) 500 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-08T23:01:08.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:07 vm11 bash[23232]: audit 2026-03-08T23:01:07.439184+0000 mon.a (mon.0) 500 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-08T23:01:08.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:07 vm11 bash[23232]: audit 
2026-03-08T23:01:07.440662+0000 mon.a (mon.0) 501 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-08T23:01:08.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:07 vm11 bash[23232]: audit 2026-03-08T23:01:07.440662+0000 mon.a (mon.0) 501 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-08T23:01:08.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:07 vm11 bash[23232]: audit 2026-03-08T23:01:07.441236+0000 mon.a (mon.0) 502 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:01:08.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:07 vm11 bash[23232]: audit 2026-03-08T23:01:07.441236+0000 mon.a (mon.0) 502 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:01:09.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:08 vm06 bash[20625]: audit 2026-03-08T23:01:07.437754+0000 mgr.y (mgr.14150) 167 : audit [DBG] from='client.14325 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm11:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:01:09.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:08 vm06 bash[20625]: audit 2026-03-08T23:01:07.437754+0000 mgr.y (mgr.14150) 167 : audit [DBG] from='client.14325 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm11:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:01:09.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:08 vm06 bash[20625]: cluster 2026-03-08T23:01:07.477980+0000 mgr.y (mgr.14150) 168 : cluster [DBG] pgmap v142: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 48 KiB/s, 0 objects/s 
recovering 2026-03-08T23:01:09.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:08 vm06 bash[20625]: cluster 2026-03-08T23:01:07.477980+0000 mgr.y (mgr.14150) 168 : cluster [DBG] pgmap v142: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 48 KiB/s, 0 objects/s recovering 2026-03-08T23:01:09.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:08 vm06 bash[27746]: audit 2026-03-08T23:01:07.437754+0000 mgr.y (mgr.14150) 167 : audit [DBG] from='client.14325 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm11:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:01:09.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:08 vm06 bash[27746]: audit 2026-03-08T23:01:07.437754+0000 mgr.y (mgr.14150) 167 : audit [DBG] from='client.14325 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm11:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:01:09.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:08 vm06 bash[27746]: cluster 2026-03-08T23:01:07.477980+0000 mgr.y (mgr.14150) 168 : cluster [DBG] pgmap v142: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 48 KiB/s, 0 objects/s recovering 2026-03-08T23:01:09.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:08 vm06 bash[27746]: cluster 2026-03-08T23:01:07.477980+0000 mgr.y (mgr.14150) 168 : cluster [DBG] pgmap v142: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 48 KiB/s, 0 objects/s recovering 2026-03-08T23:01:09.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:08 vm11 bash[23232]: audit 2026-03-08T23:01:07.437754+0000 mgr.y (mgr.14150) 167 : audit [DBG] from='client.14325 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm11:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:01:09.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:08 vm11 bash[23232]: audit 2026-03-08T23:01:07.437754+0000 mgr.y 
(mgr.14150) 167 : audit [DBG] from='client.14325 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm11:/dev/vdd", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:01:09.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:08 vm11 bash[23232]: cluster 2026-03-08T23:01:07.477980+0000 mgr.y (mgr.14150) 168 : cluster [DBG] pgmap v142: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 48 KiB/s, 0 objects/s recovering 2026-03-08T23:01:09.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:08 vm11 bash[23232]: cluster 2026-03-08T23:01:07.477980+0000 mgr.y (mgr.14150) 168 : cluster [DBG] pgmap v142: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 48 KiB/s, 0 objects/s recovering 2026-03-08T23:01:11.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:10 vm06 bash[20625]: cluster 2026-03-08T23:01:09.478268+0000 mgr.y (mgr.14150) 169 : cluster [DBG] pgmap v143: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 44 KiB/s, 0 objects/s recovering 2026-03-08T23:01:11.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:10 vm06 bash[20625]: cluster 2026-03-08T23:01:09.478268+0000 mgr.y (mgr.14150) 169 : cluster [DBG] pgmap v143: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 44 KiB/s, 0 objects/s recovering 2026-03-08T23:01:11.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:10 vm06 bash[27746]: cluster 2026-03-08T23:01:09.478268+0000 mgr.y (mgr.14150) 169 : cluster [DBG] pgmap v143: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 44 KiB/s, 0 objects/s recovering 2026-03-08T23:01:11.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:10 vm06 bash[27746]: cluster 2026-03-08T23:01:09.478268+0000 mgr.y (mgr.14150) 169 : cluster [DBG] pgmap v143: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 44 KiB/s, 0 objects/s recovering 2026-03-08T23:01:11.307 
INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:10 vm11 bash[23232]: cluster 2026-03-08T23:01:09.478268+0000 mgr.y (mgr.14150) 169 : cluster [DBG] pgmap v143: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 44 KiB/s, 0 objects/s recovering 2026-03-08T23:01:11.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:10 vm11 bash[23232]: cluster 2026-03-08T23:01:09.478268+0000 mgr.y (mgr.14150) 169 : cluster [DBG] pgmap v143: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 44 KiB/s, 0 objects/s recovering 2026-03-08T23:01:13.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:13 vm11 bash[23232]: cluster 2026-03-08T23:01:11.478472+0000 mgr.y (mgr.14150) 170 : cluster [DBG] pgmap v144: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-08T23:01:13.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:13 vm11 bash[23232]: cluster 2026-03-08T23:01:11.478472+0000 mgr.y (mgr.14150) 170 : cluster [DBG] pgmap v144: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-08T23:01:13.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:13 vm06 bash[20625]: cluster 2026-03-08T23:01:11.478472+0000 mgr.y (mgr.14150) 170 : cluster [DBG] pgmap v144: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-08T23:01:13.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:13 vm06 bash[20625]: cluster 2026-03-08T23:01:11.478472+0000 mgr.y (mgr.14150) 170 : cluster [DBG] pgmap v144: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-08T23:01:13.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:13 vm06 bash[27746]: cluster 2026-03-08T23:01:11.478472+0000 mgr.y (mgr.14150) 170 : cluster [DBG] pgmap v144: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 
100 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-08T23:01:13.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:13 vm06 bash[27746]: cluster 2026-03-08T23:01:11.478472+0000 mgr.y (mgr.14150) 170 : cluster [DBG] pgmap v144: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-08T23:01:14.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:14 vm06 bash[20625]: audit 2026-03-08T23:01:13.619460+0000 mon.b (mon.1) 14 : audit [INF] from='client.? 192.168.123.111:0/3361134068' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "ebf4133c-ae3a-4afe-9e9e-4c894f65f53e"}]: dispatch 2026-03-08T23:01:14.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:14 vm06 bash[20625]: audit 2026-03-08T23:01:13.619460+0000 mon.b (mon.1) 14 : audit [INF] from='client.? 192.168.123.111:0/3361134068' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "ebf4133c-ae3a-4afe-9e9e-4c894f65f53e"}]: dispatch 2026-03-08T23:01:14.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:14 vm06 bash[20625]: audit 2026-03-08T23:01:13.620456+0000 mon.a (mon.0) 503 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "ebf4133c-ae3a-4afe-9e9e-4c894f65f53e"}]: dispatch 2026-03-08T23:01:14.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:14 vm06 bash[20625]: audit 2026-03-08T23:01:13.620456+0000 mon.a (mon.0) 503 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "ebf4133c-ae3a-4afe-9e9e-4c894f65f53e"}]: dispatch 2026-03-08T23:01:14.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:14 vm06 bash[20625]: audit 2026-03-08T23:01:13.675601+0000 mon.a (mon.0) 504 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "ebf4133c-ae3a-4afe-9e9e-4c894f65f53e"}]': finished 2026-03-08T23:01:14.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:14 vm06 bash[20625]: audit 2026-03-08T23:01:13.675601+0000 mon.a (mon.0) 504 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "ebf4133c-ae3a-4afe-9e9e-4c894f65f53e"}]': finished 2026-03-08T23:01:14.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:14 vm06 bash[20625]: cluster 2026-03-08T23:01:13.680485+0000 mon.a (mon.0) 505 : cluster [DBG] osdmap e35: 6 total, 5 up, 6 in 2026-03-08T23:01:14.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:14 vm06 bash[20625]: cluster 2026-03-08T23:01:13.680485+0000 mon.a (mon.0) 505 : cluster [DBG] osdmap e35: 6 total, 5 up, 6 in 2026-03-08T23:01:14.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:14 vm06 bash[20625]: audit 2026-03-08T23:01:13.680769+0000 mon.a (mon.0) 506 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-08T23:01:14.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:14 vm06 bash[20625]: audit 2026-03-08T23:01:13.680769+0000 mon.a (mon.0) 506 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-08T23:01:14.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:14 vm06 bash[27746]: audit 2026-03-08T23:01:13.619460+0000 mon.b (mon.1) 14 : audit [INF] from='client.? 192.168.123.111:0/3361134068' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "ebf4133c-ae3a-4afe-9e9e-4c894f65f53e"}]: dispatch 2026-03-08T23:01:14.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:14 vm06 bash[27746]: audit 2026-03-08T23:01:13.619460+0000 mon.b (mon.1) 14 : audit [INF] from='client.? 
192.168.123.111:0/3361134068' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "ebf4133c-ae3a-4afe-9e9e-4c894f65f53e"}]: dispatch 2026-03-08T23:01:14.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:14 vm06 bash[27746]: audit 2026-03-08T23:01:13.620456+0000 mon.a (mon.0) 503 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "ebf4133c-ae3a-4afe-9e9e-4c894f65f53e"}]: dispatch 2026-03-08T23:01:14.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:14 vm06 bash[27746]: audit 2026-03-08T23:01:13.620456+0000 mon.a (mon.0) 503 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "ebf4133c-ae3a-4afe-9e9e-4c894f65f53e"}]: dispatch 2026-03-08T23:01:14.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:14 vm06 bash[27746]: audit 2026-03-08T23:01:13.675601+0000 mon.a (mon.0) 504 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "ebf4133c-ae3a-4afe-9e9e-4c894f65f53e"}]': finished 2026-03-08T23:01:14.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:14 vm06 bash[27746]: audit 2026-03-08T23:01:13.675601+0000 mon.a (mon.0) 504 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "ebf4133c-ae3a-4afe-9e9e-4c894f65f53e"}]': finished 2026-03-08T23:01:14.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:14 vm06 bash[27746]: cluster 2026-03-08T23:01:13.680485+0000 mon.a (mon.0) 505 : cluster [DBG] osdmap e35: 6 total, 5 up, 6 in 2026-03-08T23:01:14.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:14 vm06 bash[27746]: cluster 2026-03-08T23:01:13.680485+0000 mon.a (mon.0) 505 : cluster [DBG] osdmap e35: 6 total, 5 up, 6 in 2026-03-08T23:01:14.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:14 vm06 bash[27746]: audit 2026-03-08T23:01:13.680769+0000 mon.a (mon.0) 506 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-08T23:01:14.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:14 vm06 bash[27746]: audit 2026-03-08T23:01:13.680769+0000 mon.a (mon.0) 506 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-08T23:01:14.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:14 vm11 bash[23232]: audit 2026-03-08T23:01:13.619460+0000 mon.b (mon.1) 14 : audit [INF] from='client.? 192.168.123.111:0/3361134068' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "ebf4133c-ae3a-4afe-9e9e-4c894f65f53e"}]: dispatch 2026-03-08T23:01:14.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:14 vm11 bash[23232]: audit 2026-03-08T23:01:13.619460+0000 mon.b (mon.1) 14 : audit [INF] from='client.? 192.168.123.111:0/3361134068' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "ebf4133c-ae3a-4afe-9e9e-4c894f65f53e"}]: dispatch 2026-03-08T23:01:14.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:14 vm11 bash[23232]: audit 2026-03-08T23:01:13.620456+0000 mon.a (mon.0) 503 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "ebf4133c-ae3a-4afe-9e9e-4c894f65f53e"}]: dispatch 2026-03-08T23:01:14.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:14 vm11 bash[23232]: audit 2026-03-08T23:01:13.620456+0000 mon.a (mon.0) 503 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "ebf4133c-ae3a-4afe-9e9e-4c894f65f53e"}]: dispatch 2026-03-08T23:01:14.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:14 vm11 bash[23232]: audit 2026-03-08T23:01:13.675601+0000 mon.a (mon.0) 504 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "ebf4133c-ae3a-4afe-9e9e-4c894f65f53e"}]': finished 2026-03-08T23:01:14.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:14 vm11 bash[23232]: audit 2026-03-08T23:01:13.675601+0000 mon.a (mon.0) 504 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "ebf4133c-ae3a-4afe-9e9e-4c894f65f53e"}]': finished 2026-03-08T23:01:14.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:14 vm11 bash[23232]: cluster 2026-03-08T23:01:13.680485+0000 mon.a (mon.0) 505 : cluster [DBG] osdmap e35: 6 total, 5 up, 6 in 2026-03-08T23:01:14.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:14 vm11 bash[23232]: cluster 2026-03-08T23:01:13.680485+0000 mon.a (mon.0) 505 : cluster [DBG] osdmap e35: 6 total, 5 up, 6 in 2026-03-08T23:01:14.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:14 vm11 bash[23232]: audit 2026-03-08T23:01:13.680769+0000 mon.a (mon.0) 506 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-08T23:01:14.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:14 vm11 bash[23232]: audit 2026-03-08T23:01:13.680769+0000 mon.a (mon.0) 506 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 
2026-03-08T23:01:15.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:15 vm06 bash[20625]: cluster 2026-03-08T23:01:13.478795+0000 mgr.y (mgr.14150) 171 : cluster [DBG] pgmap v145: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-08T23:01:15.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:15 vm06 bash[20625]: cluster 2026-03-08T23:01:13.478795+0000 mgr.y (mgr.14150) 171 : cluster [DBG] pgmap v145: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-08T23:01:15.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:15 vm06 bash[20625]: audit 2026-03-08T23:01:14.706152+0000 mon.b (mon.1) 15 : audit [DBG] from='client.? 192.168.123.111:0/2374835280' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-08T23:01:15.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:15 vm06 bash[20625]: audit 2026-03-08T23:01:14.706152+0000 mon.b (mon.1) 15 : audit [DBG] from='client.? 192.168.123.111:0/2374835280' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-08T23:01:15.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:15 vm06 bash[27746]: cluster 2026-03-08T23:01:13.478795+0000 mgr.y (mgr.14150) 171 : cluster [DBG] pgmap v145: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-08T23:01:15.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:15 vm06 bash[27746]: cluster 2026-03-08T23:01:13.478795+0000 mgr.y (mgr.14150) 171 : cluster [DBG] pgmap v145: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-08T23:01:15.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:15 vm06 bash[27746]: audit 2026-03-08T23:01:14.706152+0000 mon.b (mon.1) 15 : audit [DBG] from='client.? 
192.168.123.111:0/2374835280' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-08T23:01:15.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:15 vm06 bash[27746]: audit 2026-03-08T23:01:14.706152+0000 mon.b (mon.1) 15 : audit [DBG] from='client.? 192.168.123.111:0/2374835280' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-08T23:01:15.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:15 vm11 bash[23232]: cluster 2026-03-08T23:01:13.478795+0000 mgr.y (mgr.14150) 171 : cluster [DBG] pgmap v145: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-08T23:01:15.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:15 vm11 bash[23232]: cluster 2026-03-08T23:01:13.478795+0000 mgr.y (mgr.14150) 171 : cluster [DBG] pgmap v145: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-08T23:01:15.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:15 vm11 bash[23232]: audit 2026-03-08T23:01:14.706152+0000 mon.b (mon.1) 15 : audit [DBG] from='client.? 192.168.123.111:0/2374835280' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-08T23:01:15.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:15 vm11 bash[23232]: audit 2026-03-08T23:01:14.706152+0000 mon.b (mon.1) 15 : audit [DBG] from='client.? 
192.168.123.111:0/2374835280' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-08T23:01:16.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:16 vm06 bash[20625]: cluster 2026-03-08T23:01:15.479109+0000 mgr.y (mgr.14150) 172 : cluster [DBG] pgmap v147: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-08T23:01:16.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:16 vm06 bash[20625]: cluster 2026-03-08T23:01:15.479109+0000 mgr.y (mgr.14150) 172 : cluster [DBG] pgmap v147: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-08T23:01:16.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:16 vm06 bash[27746]: cluster 2026-03-08T23:01:15.479109+0000 mgr.y (mgr.14150) 172 : cluster [DBG] pgmap v147: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-08T23:01:16.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:16 vm06 bash[27746]: cluster 2026-03-08T23:01:15.479109+0000 mgr.y (mgr.14150) 172 : cluster [DBG] pgmap v147: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-08T23:01:16.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:16 vm11 bash[23232]: cluster 2026-03-08T23:01:15.479109+0000 mgr.y (mgr.14150) 172 : cluster [DBG] pgmap v147: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-08T23:01:16.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:16 vm11 bash[23232]: cluster 2026-03-08T23:01:15.479109+0000 mgr.y (mgr.14150) 172 : cluster [DBG] pgmap v147: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-08T23:01:18.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:18 vm11 bash[23232]: cluster 2026-03-08T23:01:17.479393+0000 mgr.y (mgr.14150) 173 : cluster [DBG] pgmap v148: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-08T23:01:18.807 
INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:18 vm11 bash[23232]: cluster 2026-03-08T23:01:17.479393+0000 mgr.y (mgr.14150) 173 : cluster [DBG] pgmap v148: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-08T23:01:19.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:18 vm06 bash[20625]: cluster 2026-03-08T23:01:17.479393+0000 mgr.y (mgr.14150) 173 : cluster [DBG] pgmap v148: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-08T23:01:19.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:18 vm06 bash[20625]: cluster 2026-03-08T23:01:17.479393+0000 mgr.y (mgr.14150) 173 : cluster [DBG] pgmap v148: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-08T23:01:19.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:18 vm06 bash[27746]: cluster 2026-03-08T23:01:17.479393+0000 mgr.y (mgr.14150) 173 : cluster [DBG] pgmap v148: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-08T23:01:19.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:18 vm06 bash[27746]: cluster 2026-03-08T23:01:17.479393+0000 mgr.y (mgr.14150) 173 : cluster [DBG] pgmap v148: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-08T23:01:21.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:20 vm06 bash[20625]: cluster 2026-03-08T23:01:19.479706+0000 mgr.y (mgr.14150) 174 : cluster [DBG] pgmap v149: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-08T23:01:21.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:20 vm06 bash[20625]: cluster 2026-03-08T23:01:19.479706+0000 mgr.y (mgr.14150) 174 : cluster [DBG] pgmap v149: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-08T23:01:21.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:20 vm06 bash[27746]: cluster 2026-03-08T23:01:19.479706+0000 mgr.y (mgr.14150) 174 : cluster [DBG] pgmap v149: 1 
pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-08T23:01:21.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:20 vm06 bash[27746]: cluster 2026-03-08T23:01:19.479706+0000 mgr.y (mgr.14150) 174 : cluster [DBG] pgmap v149: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-08T23:01:21.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:20 vm11 bash[23232]: cluster 2026-03-08T23:01:19.479706+0000 mgr.y (mgr.14150) 174 : cluster [DBG] pgmap v149: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-08T23:01:21.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:20 vm11 bash[23232]: cluster 2026-03-08T23:01:19.479706+0000 mgr.y (mgr.14150) 174 : cluster [DBG] pgmap v149: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-08T23:01:22.932 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:22 vm11 bash[23232]: cluster 2026-03-08T23:01:21.479967+0000 mgr.y (mgr.14150) 175 : cluster [DBG] pgmap v150: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-08T23:01:22.932 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:22 vm11 bash[23232]: cluster 2026-03-08T23:01:21.479967+0000 mgr.y (mgr.14150) 175 : cluster [DBG] pgmap v150: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-08T23:01:23.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:22 vm06 bash[20625]: cluster 2026-03-08T23:01:21.479967+0000 mgr.y (mgr.14150) 175 : cluster [DBG] pgmap v150: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-08T23:01:23.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:22 vm06 bash[20625]: cluster 2026-03-08T23:01:21.479967+0000 mgr.y (mgr.14150) 175 : cluster [DBG] pgmap v150: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-08T23:01:23.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:22 vm06 
bash[27746]: cluster 2026-03-08T23:01:21.479967+0000 mgr.y (mgr.14150) 175 : cluster [DBG] pgmap v150: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-08T23:01:23.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:22 vm06 bash[27746]: cluster 2026-03-08T23:01:21.479967+0000 mgr.y (mgr.14150) 175 : cluster [DBG] pgmap v150: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-08T23:01:23.706 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:23 vm11 bash[23232]: audit 2026-03-08T23:01:23.196406+0000 mon.a (mon.0) 507 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch 2026-03-08T23:01:23.706 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:23 vm11 bash[23232]: audit 2026-03-08T23:01:23.196406+0000 mon.a (mon.0) 507 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch 2026-03-08T23:01:23.706 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:23 vm11 bash[23232]: audit 2026-03-08T23:01:23.197021+0000 mon.a (mon.0) 508 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:01:23.706 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:23 vm11 bash[23232]: audit 2026-03-08T23:01:23.197021+0000 mon.a (mon.0) 508 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:01:23.706 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:23 vm11 bash[23232]: cephadm 2026-03-08T23:01:23.197493+0000 mgr.y (mgr.14150) 176 : cephadm [INF] Deploying daemon osd.5 on vm11 2026-03-08T23:01:23.706 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:23 vm11 bash[23232]: cephadm 2026-03-08T23:01:23.197493+0000 mgr.y (mgr.14150) 176 : cephadm [INF] Deploying daemon osd.5 on vm11 
2026-03-08T23:01:24.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:23 vm06 bash[20625]: audit 2026-03-08T23:01:23.196406+0000 mon.a (mon.0) 507 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch 2026-03-08T23:01:24.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:23 vm06 bash[20625]: audit 2026-03-08T23:01:23.196406+0000 mon.a (mon.0) 507 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch 2026-03-08T23:01:24.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:23 vm06 bash[20625]: audit 2026-03-08T23:01:23.197021+0000 mon.a (mon.0) 508 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:01:24.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:23 vm06 bash[20625]: audit 2026-03-08T23:01:23.197021+0000 mon.a (mon.0) 508 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:01:24.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:23 vm06 bash[20625]: cephadm 2026-03-08T23:01:23.197493+0000 mgr.y (mgr.14150) 176 : cephadm [INF] Deploying daemon osd.5 on vm11 2026-03-08T23:01:24.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:23 vm06 bash[20625]: cephadm 2026-03-08T23:01:23.197493+0000 mgr.y (mgr.14150) 176 : cephadm [INF] Deploying daemon osd.5 on vm11 2026-03-08T23:01:24.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:23 vm06 bash[27746]: audit 2026-03-08T23:01:23.196406+0000 mon.a (mon.0) 507 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch 2026-03-08T23:01:24.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:23 vm06 bash[27746]: audit 2026-03-08T23:01:23.196406+0000 mon.a (mon.0) 507 : 
audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.5"}]: dispatch 2026-03-08T23:01:24.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:23 vm06 bash[27746]: audit 2026-03-08T23:01:23.197021+0000 mon.a (mon.0) 508 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:01:24.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:23 vm06 bash[27746]: audit 2026-03-08T23:01:23.197021+0000 mon.a (mon.0) 508 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:01:24.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:23 vm06 bash[27746]: cephadm 2026-03-08T23:01:23.197493+0000 mgr.y (mgr.14150) 176 : cephadm [INF] Deploying daemon osd.5 on vm11 2026-03-08T23:01:24.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:23 vm06 bash[27746]: cephadm 2026-03-08T23:01:23.197493+0000 mgr.y (mgr.14150) 176 : cephadm [INF] Deploying daemon osd.5 on vm11 2026-03-08T23:01:24.428 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:24 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:01:24.428 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:24 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. 
Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:01:24.428 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:01:24 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:01:24.428 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:01:24 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:01:24.428 INFO:journalctl@ceph.osd.4.vm11.stdout:Mar 08 23:01:24 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:01:24.428 INFO:journalctl@ceph.osd.4.vm11.stdout:Mar 08 23:01:24 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-08T23:01:25.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:24 vm06 bash[20625]: cluster 2026-03-08T23:01:23.480279+0000 mgr.y (mgr.14150) 177 : cluster [DBG] pgmap v151: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-08T23:01:25.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:24 vm06 bash[20625]: cluster 2026-03-08T23:01:23.480279+0000 mgr.y (mgr.14150) 177 : cluster [DBG] pgmap v151: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-08T23:01:25.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:24 vm06 bash[27746]: cluster 2026-03-08T23:01:23.480279+0000 mgr.y (mgr.14150) 177 : cluster [DBG] pgmap v151: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-08T23:01:25.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:24 vm06 bash[27746]: cluster 2026-03-08T23:01:23.480279+0000 mgr.y (mgr.14150) 177 : cluster [DBG] pgmap v151: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-08T23:01:25.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:24 vm11 bash[23232]: cluster 2026-03-08T23:01:23.480279+0000 mgr.y (mgr.14150) 177 : cluster [DBG] pgmap v151: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-08T23:01:25.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:24 vm11 bash[23232]: cluster 2026-03-08T23:01:23.480279+0000 mgr.y (mgr.14150) 177 : cluster [DBG] pgmap v151: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-08T23:01:26.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:25 vm06 bash[20625]: audit 2026-03-08T23:01:24.804335+0000 mon.a (mon.0) 509 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:01:26.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:25 vm06 bash[20625]: audit 2026-03-08T23:01:24.804335+0000 
mon.a (mon.0) 509 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:01:26.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:25 vm06 bash[20625]: audit 2026-03-08T23:01:24.818421+0000 mon.a (mon.0) 510 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:01:26.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:25 vm06 bash[20625]: audit 2026-03-08T23:01:24.818421+0000 mon.a (mon.0) 510 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:01:26.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:25 vm06 bash[20625]: audit 2026-03-08T23:01:24.826469+0000 mon.a (mon.0) 511 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:01:26.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:25 vm06 bash[20625]: audit 2026-03-08T23:01:24.826469+0000 mon.a (mon.0) 511 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:01:26.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:25 vm06 bash[27746]: audit 2026-03-08T23:01:24.804335+0000 mon.a (mon.0) 509 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:01:26.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:25 vm06 bash[27746]: audit 2026-03-08T23:01:24.804335+0000 mon.a (mon.0) 509 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:01:26.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:25 vm06 bash[27746]: audit 2026-03-08T23:01:24.818421+0000 mon.a (mon.0) 510 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:01:26.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:25 vm06 bash[27746]: audit 
2026-03-08T23:01:24.818421+0000 mon.a (mon.0) 510 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:01:26.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:25 vm06 bash[27746]: audit 2026-03-08T23:01:24.826469+0000 mon.a (mon.0) 511 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:01:26.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:25 vm06 bash[27746]: audit 2026-03-08T23:01:24.826469+0000 mon.a (mon.0) 511 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:01:26.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:25 vm11 bash[23232]: audit 2026-03-08T23:01:24.804335+0000 mon.a (mon.0) 509 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:01:26.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:25 vm11 bash[23232]: audit 2026-03-08T23:01:24.804335+0000 mon.a (mon.0) 509 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:01:26.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:25 vm11 bash[23232]: audit 2026-03-08T23:01:24.818421+0000 mon.a (mon.0) 510 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:01:26.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:25 vm11 bash[23232]: audit 2026-03-08T23:01:24.818421+0000 mon.a (mon.0) 510 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:01:26.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:25 vm11 bash[23232]: audit 2026-03-08T23:01:24.826469+0000 mon.a (mon.0) 511 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:01:26.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:25 vm11 bash[23232]: audit 2026-03-08T23:01:24.826469+0000 mon.a (mon.0) 511 : 
audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:01:27.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:26 vm06 bash[20625]: cluster 2026-03-08T23:01:25.480622+0000 mgr.y (mgr.14150) 178 : cluster [DBG] pgmap v152: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-08T23:01:27.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:26 vm06 bash[20625]: cluster 2026-03-08T23:01:25.480622+0000 mgr.y (mgr.14150) 178 : cluster [DBG] pgmap v152: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-08T23:01:27.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:26 vm06 bash[27746]: cluster 2026-03-08T23:01:25.480622+0000 mgr.y (mgr.14150) 178 : cluster [DBG] pgmap v152: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-08T23:01:27.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:26 vm06 bash[27746]: cluster 2026-03-08T23:01:25.480622+0000 mgr.y (mgr.14150) 178 : cluster [DBG] pgmap v152: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-08T23:01:27.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:26 vm11 bash[23232]: cluster 2026-03-08T23:01:25.480622+0000 mgr.y (mgr.14150) 178 : cluster [DBG] pgmap v152: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-08T23:01:27.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:26 vm11 bash[23232]: cluster 2026-03-08T23:01:25.480622+0000 mgr.y (mgr.14150) 178 : cluster [DBG] pgmap v152: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-08T23:01:29.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:28 vm06 bash[20625]: cluster 2026-03-08T23:01:27.480861+0000 mgr.y (mgr.14150) 179 : cluster [DBG] pgmap v153: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-08T23:01:29.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:28 vm06 
bash[20625]: cluster 2026-03-08T23:01:27.480861+0000 mgr.y (mgr.14150) 179 : cluster [DBG] pgmap v153: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-08T23:01:29.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:28 vm06 bash[20625]: audit 2026-03-08T23:01:28.948163+0000 mon.b (mon.1) 16 : audit [INF] from='osd.5 v2:192.168.123.111:6804/3102108212' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-08T23:01:29.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:28 vm06 bash[20625]: audit 2026-03-08T23:01:28.948163+0000 mon.b (mon.1) 16 : audit [INF] from='osd.5 v2:192.168.123.111:6804/3102108212' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-08T23:01:29.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:28 vm06 bash[20625]: audit 2026-03-08T23:01:28.949072+0000 mon.a (mon.0) 512 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-08T23:01:29.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:28 vm06 bash[20625]: audit 2026-03-08T23:01:28.949072+0000 mon.a (mon.0) 512 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-08T23:01:29.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:28 vm06 bash[27746]: cluster 2026-03-08T23:01:27.480861+0000 mgr.y (mgr.14150) 179 : cluster [DBG] pgmap v153: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-08T23:01:29.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:28 vm06 bash[27746]: cluster 2026-03-08T23:01:27.480861+0000 mgr.y (mgr.14150) 179 : cluster [DBG] pgmap v153: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-08T23:01:29.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:28 vm06 bash[27746]: 
audit 2026-03-08T23:01:28.948163+0000 mon.b (mon.1) 16 : audit [INF] from='osd.5 v2:192.168.123.111:6804/3102108212' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-08T23:01:29.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:28 vm06 bash[27746]: audit 2026-03-08T23:01:28.948163+0000 mon.b (mon.1) 16 : audit [INF] from='osd.5 v2:192.168.123.111:6804/3102108212' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-08T23:01:29.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:28 vm06 bash[27746]: audit 2026-03-08T23:01:28.949072+0000 mon.a (mon.0) 512 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-08T23:01:29.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:28 vm06 bash[27746]: audit 2026-03-08T23:01:28.949072+0000 mon.a (mon.0) 512 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-08T23:01:29.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:28 vm11 bash[23232]: cluster 2026-03-08T23:01:27.480861+0000 mgr.y (mgr.14150) 179 : cluster [DBG] pgmap v153: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-08T23:01:29.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:28 vm11 bash[23232]: cluster 2026-03-08T23:01:27.480861+0000 mgr.y (mgr.14150) 179 : cluster [DBG] pgmap v153: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-08T23:01:29.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:28 vm11 bash[23232]: audit 2026-03-08T23:01:28.948163+0000 mon.b (mon.1) 16 : audit [INF] from='osd.5 v2:192.168.123.111:6804/3102108212' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-08T23:01:29.307 
INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:28 vm11 bash[23232]: audit 2026-03-08T23:01:28.948163+0000 mon.b (mon.1) 16 : audit [INF] from='osd.5 v2:192.168.123.111:6804/3102108212' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-08T23:01:29.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:28 vm11 bash[23232]: audit 2026-03-08T23:01:28.949072+0000 mon.a (mon.0) 512 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-08T23:01:29.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:28 vm11 bash[23232]: audit 2026-03-08T23:01:28.949072+0000 mon.a (mon.0) 512 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]: dispatch 2026-03-08T23:01:30.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:30 vm06 bash[20625]: audit 2026-03-08T23:01:28.992152+0000 mon.a (mon.0) 513 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]': finished 2026-03-08T23:01:30.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:30 vm06 bash[20625]: audit 2026-03-08T23:01:28.992152+0000 mon.a (mon.0) 513 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]': finished 2026-03-08T23:01:30.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:30 vm06 bash[20625]: audit 2026-03-08T23:01:28.995748+0000 mon.b (mon.1) 17 : audit [INF] from='osd.5 v2:192.168.123.111:6804/3102108212' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm11", "root=default"]}]: dispatch 2026-03-08T23:01:30.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:30 vm06 bash[20625]: audit 2026-03-08T23:01:28.995748+0000 mon.b (mon.1) 17 : audit [INF] from='osd.5 v2:192.168.123.111:6804/3102108212' 
entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm11", "root=default"]}]: dispatch 2026-03-08T23:01:30.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:30 vm06 bash[20625]: cluster 2026-03-08T23:01:28.996870+0000 mon.a (mon.0) 514 : cluster [DBG] osdmap e36: 6 total, 5 up, 6 in 2026-03-08T23:01:30.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:30 vm06 bash[20625]: cluster 2026-03-08T23:01:28.996870+0000 mon.a (mon.0) 514 : cluster [DBG] osdmap e36: 6 total, 5 up, 6 in 2026-03-08T23:01:30.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:30 vm06 bash[20625]: audit 2026-03-08T23:01:28.997410+0000 mon.a (mon.0) 515 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-08T23:01:30.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:30 vm06 bash[20625]: audit 2026-03-08T23:01:28.997410+0000 mon.a (mon.0) 515 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-08T23:01:30.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:30 vm06 bash[20625]: audit 2026-03-08T23:01:28.997712+0000 mon.a (mon.0) 516 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm11", "root=default"]}]: dispatch 2026-03-08T23:01:30.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:30 vm06 bash[20625]: audit 2026-03-08T23:01:28.997712+0000 mon.a (mon.0) 516 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm11", "root=default"]}]: dispatch 2026-03-08T23:01:30.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:30 vm06 bash[27746]: audit 2026-03-08T23:01:28.992152+0000 mon.a (mon.0) 513 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": 
["5"]}]': finished 2026-03-08T23:01:30.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:30 vm06 bash[27746]: audit 2026-03-08T23:01:28.992152+0000 mon.a (mon.0) 513 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]': finished 2026-03-08T23:01:30.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:30 vm06 bash[27746]: audit 2026-03-08T23:01:28.995748+0000 mon.b (mon.1) 17 : audit [INF] from='osd.5 v2:192.168.123.111:6804/3102108212' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm11", "root=default"]}]: dispatch 2026-03-08T23:01:30.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:30 vm06 bash[27746]: audit 2026-03-08T23:01:28.995748+0000 mon.b (mon.1) 17 : audit [INF] from='osd.5 v2:192.168.123.111:6804/3102108212' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm11", "root=default"]}]: dispatch 2026-03-08T23:01:30.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:30 vm06 bash[27746]: cluster 2026-03-08T23:01:28.996870+0000 mon.a (mon.0) 514 : cluster [DBG] osdmap e36: 6 total, 5 up, 6 in 2026-03-08T23:01:30.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:30 vm06 bash[27746]: cluster 2026-03-08T23:01:28.996870+0000 mon.a (mon.0) 514 : cluster [DBG] osdmap e36: 6 total, 5 up, 6 in 2026-03-08T23:01:30.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:30 vm06 bash[27746]: audit 2026-03-08T23:01:28.997410+0000 mon.a (mon.0) 515 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-08T23:01:30.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:30 vm06 bash[27746]: audit 2026-03-08T23:01:28.997410+0000 mon.a (mon.0) 515 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-08T23:01:30.280 
INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:30 vm06 bash[27746]: audit 2026-03-08T23:01:28.997712+0000 mon.a (mon.0) 516 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm11", "root=default"]}]: dispatch 2026-03-08T23:01:30.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:30 vm06 bash[27746]: audit 2026-03-08T23:01:28.997712+0000 mon.a (mon.0) 516 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm11", "root=default"]}]: dispatch 2026-03-08T23:01:30.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:30 vm11 bash[23232]: audit 2026-03-08T23:01:28.992152+0000 mon.a (mon.0) 513 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]': finished 2026-03-08T23:01:30.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:30 vm11 bash[23232]: audit 2026-03-08T23:01:28.992152+0000 mon.a (mon.0) 513 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["5"]}]': finished 2026-03-08T23:01:30.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:30 vm11 bash[23232]: audit 2026-03-08T23:01:28.995748+0000 mon.b (mon.1) 17 : audit [INF] from='osd.5 v2:192.168.123.111:6804/3102108212' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm11", "root=default"]}]: dispatch 2026-03-08T23:01:30.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:30 vm11 bash[23232]: audit 2026-03-08T23:01:28.995748+0000 mon.b (mon.1) 17 : audit [INF] from='osd.5 v2:192.168.123.111:6804/3102108212' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm11", "root=default"]}]: dispatch 2026-03-08T23:01:30.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:30 vm11 bash[23232]: cluster 
2026-03-08T23:01:28.996870+0000 mon.a (mon.0) 514 : cluster [DBG] osdmap e36: 6 total, 5 up, 6 in 2026-03-08T23:01:30.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:30 vm11 bash[23232]: cluster 2026-03-08T23:01:28.996870+0000 mon.a (mon.0) 514 : cluster [DBG] osdmap e36: 6 total, 5 up, 6 in 2026-03-08T23:01:30.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:30 vm11 bash[23232]: audit 2026-03-08T23:01:28.997410+0000 mon.a (mon.0) 515 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-08T23:01:30.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:30 vm11 bash[23232]: audit 2026-03-08T23:01:28.997410+0000 mon.a (mon.0) 515 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-08T23:01:30.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:30 vm11 bash[23232]: audit 2026-03-08T23:01:28.997712+0000 mon.a (mon.0) 516 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm11", "root=default"]}]: dispatch 2026-03-08T23:01:30.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:30 vm11 bash[23232]: audit 2026-03-08T23:01:28.997712+0000 mon.a (mon.0) 516 : audit [INF] from='osd.5 ' entity='osd.5' cmd=[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm11", "root=default"]}]: dispatch 2026-03-08T23:01:31.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:31 vm06 bash[20625]: cluster 2026-03-08T23:01:29.481110+0000 mgr.y (mgr.14150) 180 : cluster [DBG] pgmap v155: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-08T23:01:31.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:31 vm06 bash[20625]: cluster 2026-03-08T23:01:29.481110+0000 mgr.y (mgr.14150) 180 : cluster [DBG] pgmap v155: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 
GiB / 100 GiB avail 2026-03-08T23:01:31.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:31 vm06 bash[20625]: audit 2026-03-08T23:01:29.999028+0000 mon.a (mon.0) 517 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm11", "root=default"]}]': finished 2026-03-08T23:01:31.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:31 vm06 bash[20625]: audit 2026-03-08T23:01:29.999028+0000 mon.a (mon.0) 517 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm11", "root=default"]}]': finished 2026-03-08T23:01:31.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:31 vm06 bash[20625]: cluster 2026-03-08T23:01:30.002180+0000 mon.a (mon.0) 518 : cluster [DBG] osdmap e37: 6 total, 5 up, 6 in 2026-03-08T23:01:31.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:31 vm06 bash[20625]: cluster 2026-03-08T23:01:30.002180+0000 mon.a (mon.0) 518 : cluster [DBG] osdmap e37: 6 total, 5 up, 6 in 2026-03-08T23:01:31.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:31 vm06 bash[20625]: audit 2026-03-08T23:01:30.003231+0000 mon.a (mon.0) 519 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-08T23:01:31.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:31 vm06 bash[20625]: audit 2026-03-08T23:01:30.003231+0000 mon.a (mon.0) 519 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-08T23:01:31.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:31 vm06 bash[20625]: audit 2026-03-08T23:01:30.007057+0000 mon.a (mon.0) 520 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-08T23:01:31.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:31 vm06 bash[20625]: 
audit 2026-03-08T23:01:30.007057+0000 mon.a (mon.0) 520 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-08T23:01:31.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:31 vm06 bash[20625]: cluster 2026-03-08T23:01:30.955771+0000 mon.a (mon.0) 521 : cluster [INF] osd.5 v2:192.168.123.111:6804/3102108212 boot 2026-03-08T23:01:31.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:31 vm06 bash[20625]: cluster 2026-03-08T23:01:30.955771+0000 mon.a (mon.0) 521 : cluster [INF] osd.5 v2:192.168.123.111:6804/3102108212 boot 2026-03-08T23:01:31.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:31 vm06 bash[20625]: cluster 2026-03-08T23:01:30.955791+0000 mon.a (mon.0) 522 : cluster [DBG] osdmap e38: 6 total, 6 up, 6 in 2026-03-08T23:01:31.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:31 vm06 bash[20625]: cluster 2026-03-08T23:01:30.955791+0000 mon.a (mon.0) 522 : cluster [DBG] osdmap e38: 6 total, 6 up, 6 in 2026-03-08T23:01:31.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:31 vm06 bash[20625]: audit 2026-03-08T23:01:30.956953+0000 mon.a (mon.0) 523 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-08T23:01:31.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:31 vm06 bash[20625]: audit 2026-03-08T23:01:30.956953+0000 mon.a (mon.0) 523 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-08T23:01:31.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:31 vm06 bash[27746]: cluster 2026-03-08T23:01:29.481110+0000 mgr.y (mgr.14150) 180 : cluster [DBG] pgmap v155: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-08T23:01:31.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:31 vm06 bash[27746]: cluster 2026-03-08T23:01:29.481110+0000 mgr.y (mgr.14150) 
180 : cluster [DBG] pgmap v155: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-08T23:01:31.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:31 vm06 bash[27746]: audit 2026-03-08T23:01:29.999028+0000 mon.a (mon.0) 517 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm11", "root=default"]}]': finished 2026-03-08T23:01:31.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:31 vm06 bash[27746]: audit 2026-03-08T23:01:29.999028+0000 mon.a (mon.0) 517 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm11", "root=default"]}]': finished 2026-03-08T23:01:31.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:31 vm06 bash[27746]: cluster 2026-03-08T23:01:30.002180+0000 mon.a (mon.0) 518 : cluster [DBG] osdmap e37: 6 total, 5 up, 6 in 2026-03-08T23:01:31.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:31 vm06 bash[27746]: cluster 2026-03-08T23:01:30.002180+0000 mon.a (mon.0) 518 : cluster [DBG] osdmap e37: 6 total, 5 up, 6 in 2026-03-08T23:01:31.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:31 vm06 bash[27746]: audit 2026-03-08T23:01:30.003231+0000 mon.a (mon.0) 519 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-08T23:01:31.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:31 vm06 bash[27746]: audit 2026-03-08T23:01:30.003231+0000 mon.a (mon.0) 519 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-08T23:01:31.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:31 vm06 bash[27746]: audit 2026-03-08T23:01:30.007057+0000 mon.a (mon.0) 520 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 
2026-03-08T23:01:31.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:31 vm06 bash[27746]: audit 2026-03-08T23:01:30.007057+0000 mon.a (mon.0) 520 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-08T23:01:31.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:31 vm06 bash[27746]: cluster 2026-03-08T23:01:30.955771+0000 mon.a (mon.0) 521 : cluster [INF] osd.5 v2:192.168.123.111:6804/3102108212 boot 2026-03-08T23:01:31.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:31 vm06 bash[27746]: cluster 2026-03-08T23:01:30.955771+0000 mon.a (mon.0) 521 : cluster [INF] osd.5 v2:192.168.123.111:6804/3102108212 boot 2026-03-08T23:01:31.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:31 vm06 bash[27746]: cluster 2026-03-08T23:01:30.955791+0000 mon.a (mon.0) 522 : cluster [DBG] osdmap e38: 6 total, 6 up, 6 in 2026-03-08T23:01:31.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:31 vm06 bash[27746]: cluster 2026-03-08T23:01:30.955791+0000 mon.a (mon.0) 522 : cluster [DBG] osdmap e38: 6 total, 6 up, 6 in 2026-03-08T23:01:31.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:31 vm06 bash[27746]: audit 2026-03-08T23:01:30.956953+0000 mon.a (mon.0) 523 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-08T23:01:31.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:31 vm06 bash[27746]: audit 2026-03-08T23:01:30.956953+0000 mon.a (mon.0) 523 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-08T23:01:31.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:31 vm11 bash[23232]: cluster 2026-03-08T23:01:29.481110+0000 mgr.y (mgr.14150) 180 : cluster [DBG] pgmap v155: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-08T23:01:31.307 
INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:31 vm11 bash[23232]: cluster 2026-03-08T23:01:29.481110+0000 mgr.y (mgr.14150) 180 : cluster [DBG] pgmap v155: 1 pgs: 1 active+clean; 449 KiB data, 134 MiB used, 100 GiB / 100 GiB avail 2026-03-08T23:01:31.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:31 vm11 bash[23232]: audit 2026-03-08T23:01:29.999028+0000 mon.a (mon.0) 517 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm11", "root=default"]}]': finished 2026-03-08T23:01:31.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:31 vm11 bash[23232]: audit 2026-03-08T23:01:29.999028+0000 mon.a (mon.0) 517 : audit [INF] from='osd.5 ' entity='osd.5' cmd='[{"prefix": "osd crush create-or-move", "id": 5, "weight":0.0195, "args": ["host=vm11", "root=default"]}]': finished 2026-03-08T23:01:31.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:31 vm11 bash[23232]: cluster 2026-03-08T23:01:30.002180+0000 mon.a (mon.0) 518 : cluster [DBG] osdmap e37: 6 total, 5 up, 6 in 2026-03-08T23:01:31.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:31 vm11 bash[23232]: cluster 2026-03-08T23:01:30.002180+0000 mon.a (mon.0) 518 : cluster [DBG] osdmap e37: 6 total, 5 up, 6 in 2026-03-08T23:01:31.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:31 vm11 bash[23232]: audit 2026-03-08T23:01:30.003231+0000 mon.a (mon.0) 519 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-08T23:01:31.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:31 vm11 bash[23232]: audit 2026-03-08T23:01:30.003231+0000 mon.a (mon.0) 519 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-08T23:01:31.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:31 vm11 bash[23232]: audit 2026-03-08T23:01:30.007057+0000 mon.a (mon.0) 520 : 
audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-08T23:01:31.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:31 vm11 bash[23232]: audit 2026-03-08T23:01:30.007057+0000 mon.a (mon.0) 520 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-08T23:01:31.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:31 vm11 bash[23232]: cluster 2026-03-08T23:01:30.955771+0000 mon.a (mon.0) 521 : cluster [INF] osd.5 v2:192.168.123.111:6804/3102108212 boot 2026-03-08T23:01:31.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:31 vm11 bash[23232]: cluster 2026-03-08T23:01:30.955771+0000 mon.a (mon.0) 521 : cluster [INF] osd.5 v2:192.168.123.111:6804/3102108212 boot 2026-03-08T23:01:31.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:31 vm11 bash[23232]: cluster 2026-03-08T23:01:30.955791+0000 mon.a (mon.0) 522 : cluster [DBG] osdmap e38: 6 total, 6 up, 6 in 2026-03-08T23:01:31.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:31 vm11 bash[23232]: cluster 2026-03-08T23:01:30.955791+0000 mon.a (mon.0) 522 : cluster [DBG] osdmap e38: 6 total, 6 up, 6 in 2026-03-08T23:01:31.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:31 vm11 bash[23232]: audit 2026-03-08T23:01:30.956953+0000 mon.a (mon.0) 523 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-08T23:01:31.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:31 vm11 bash[23232]: audit 2026-03-08T23:01:30.956953+0000 mon.a (mon.0) 523 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-08T23:01:32.558 INFO:teuthology.orchestra.run.vm11.stdout:Created osd(s) 5 on host 'vm11' 2026-03-08T23:01:32.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:32 vm11 bash[23232]: 
cluster 2026-03-08T23:01:29.981465+0000 osd.5 (osd.5) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-08T23:01:32.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:32 vm11 bash[23232]: cluster 2026-03-08T23:01:29.981465+0000 osd.5 (osd.5) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-08T23:01:32.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:32 vm11 bash[23232]: cluster 2026-03-08T23:01:29.981520+0000 osd.5 (osd.5) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-08T23:01:32.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:32 vm11 bash[23232]: cluster 2026-03-08T23:01:29.981520+0000 osd.5 (osd.5) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-08T23:01:32.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:32 vm11 bash[23232]: audit 2026-03-08T23:01:31.096349+0000 mon.a (mon.0) 524 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:01:32.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:32 vm11 bash[23232]: audit 2026-03-08T23:01:31.096349+0000 mon.a (mon.0) 524 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:01:32.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:32 vm11 bash[23232]: audit 2026-03-08T23:01:31.102259+0000 mon.a (mon.0) 525 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:01:32.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:32 vm11 bash[23232]: audit 2026-03-08T23:01:31.102259+0000 mon.a (mon.0) 525 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:01:32.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:32 vm11 bash[23232]: audit 2026-03-08T23:01:31.102971+0000 mon.a (mon.0) 526 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:01:32.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:32 vm11 bash[23232]: audit 
2026-03-08T23:01:31.102971+0000 mon.a (mon.0) 526 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:01:32.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:32 vm11 bash[23232]: audit 2026-03-08T23:01:31.103508+0000 mon.a (mon.0) 527 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:01:32.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:32 vm11 bash[23232]: audit 2026-03-08T23:01:31.103508+0000 mon.a (mon.0) 527 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:01:32.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:32 vm11 bash[23232]: audit 2026-03-08T23:01:31.106739+0000 mon.a (mon.0) 528 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:01:32.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:32 vm11 bash[23232]: audit 2026-03-08T23:01:31.106739+0000 mon.a (mon.0) 528 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:01:32.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:32 vm11 bash[23232]: cluster 2026-03-08T23:01:32.054089+0000 mon.a (mon.0) 529 : cluster [DBG] osdmap e39: 6 total, 6 up, 6 in 2026-03-08T23:01:32.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:32 vm11 bash[23232]: cluster 2026-03-08T23:01:32.054089+0000 mon.a (mon.0) 529 : cluster [DBG] osdmap e39: 6 total, 6 up, 6 in 2026-03-08T23:01:32.748 DEBUG:teuthology.orchestra.run.vm11:osd.5> sudo journalctl -f -n 0 -u ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@osd.5.service 2026-03-08T23:01:32.749 INFO:tasks.cephadm:Deploying osd.6 on vm11 with /dev/vdc... 
2026-03-08T23:01:32.749 DEBUG:teuthology.orchestra.run.vm11:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e2eb96e6-1b41-11f1-83e5-75f1b5373d30 -- lvm zap /dev/vdc
2026-03-08T23:01:32.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:32 vm06 bash[20625]: cluster 2026-03-08T23:01:29.981465+0000 osd.5 (osd.5) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-08T23:01:32.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:32 vm06 bash[20625]: cluster 2026-03-08T23:01:29.981520+0000 osd.5 (osd.5) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-08T23:01:32.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:32 vm06 bash[20625]: audit 2026-03-08T23:01:31.096349+0000 mon.a (mon.0) 524 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:01:32.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:32 vm06 bash[20625]: audit 2026-03-08T23:01:31.102259+0000 mon.a (mon.0) 525 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:01:32.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:32 vm06 bash[20625]: audit 2026-03-08T23:01:31.102971+0000 mon.a (mon.0) 526 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:01:32.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:32 vm06 bash[20625]: audit 2026-03-08T23:01:31.103508+0000 mon.a (mon.0) 527 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T23:01:32.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:32 vm06 bash[20625]: audit 2026-03-08T23:01:31.106739+0000 mon.a (mon.0) 528 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:01:32.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:32 vm06 bash[20625]: cluster 2026-03-08T23:01:32.054089+0000 mon.a (mon.0) 529 : cluster [DBG] osdmap e39: 6 total, 6 up, 6 in
2026-03-08T23:01:32.779 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:32 vm06 bash[27746]: cluster 2026-03-08T23:01:29.981465+0000 osd.5 (osd.5) 1 : cluster [DBG] purged_snaps scrub starts
2026-03-08T23:01:32.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:32 vm06 bash[27746]: cluster 2026-03-08T23:01:29.981520+0000 osd.5 (osd.5) 2 : cluster [DBG] purged_snaps scrub ok
2026-03-08T23:01:32.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:32 vm06 bash[27746]: audit 2026-03-08T23:01:31.096349+0000 mon.a (mon.0) 524 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:01:32.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:32 vm06 bash[27746]: audit 2026-03-08T23:01:31.102259+0000 mon.a (mon.0) 525 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:01:32.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:32 vm06 bash[27746]: audit 2026-03-08T23:01:31.102971+0000 mon.a (mon.0) 526 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:01:32.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:32 vm06 bash[27746]: audit 2026-03-08T23:01:31.103508+0000 mon.a (mon.0) 527 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T23:01:32.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:32 vm06 bash[27746]: audit 2026-03-08T23:01:31.106739+0000 mon.a (mon.0) 528 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:01:32.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:32 vm06 bash[27746]: cluster 2026-03-08T23:01:32.054089+0000 mon.a (mon.0) 529 : cluster [DBG] osdmap e39: 6 total, 6 up, 6 in
2026-03-08T23:01:33.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:33 vm06 bash[20625]: cluster 2026-03-08T23:01:31.481411+0000 mgr.y (mgr.14150) 181 : cluster [DBG] pgmap v158: 1 pgs: 1 unknown; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail
2026-03-08T23:01:33.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:33 vm06 bash[20625]: audit 2026-03-08T23:01:32.468830+0000 mon.a (mon.0) 530 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T23:01:33.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:33 vm06 bash[20625]: audit 2026-03-08T23:01:32.494378+0000 mon.a (mon.0) 531 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:01:33.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:33 vm06 bash[20625]: audit 2026-03-08T23:01:32.553833+0000 mon.a (mon.0) 532 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:01:33.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:33 vm06 bash[20625]: cluster 2026-03-08T23:01:33.018701+0000 mon.a (mon.0) 533 : cluster [DBG] osdmap e40: 6 total, 6 up, 6 in
2026-03-08T23:01:33.779 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:33 vm06 bash[27746]: cluster 2026-03-08T23:01:31.481411+0000 mgr.y (mgr.14150) 181 : cluster [DBG] pgmap v158: 1 pgs: 1 unknown; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail
2026-03-08T23:01:33.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:33 vm06 bash[27746]: audit 2026-03-08T23:01:32.468830+0000 mon.a (mon.0) 530 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T23:01:33.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:33 vm06 bash[27746]: audit 2026-03-08T23:01:32.494378+0000 mon.a (mon.0) 531 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:01:33.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:33 vm06 bash[27746]: audit 2026-03-08T23:01:32.553833+0000 mon.a (mon.0) 532 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:01:33.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:33 vm06 bash[27746]: cluster 2026-03-08T23:01:33.018701+0000 mon.a (mon.0) 533 : cluster [DBG] osdmap e40: 6 total, 6 up, 6 in
2026-03-08T23:01:33.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:33 vm11 bash[23232]: cluster 2026-03-08T23:01:31.481411+0000 mgr.y (mgr.14150) 181 : cluster [DBG] pgmap v158: 1 pgs: 1 unknown; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail
2026-03-08T23:01:33.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:33 vm11 bash[23232]: audit 2026-03-08T23:01:32.468830+0000 mon.a (mon.0) 530 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T23:01:33.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:33 vm11 bash[23232]: audit 2026-03-08T23:01:32.494378+0000 mon.a (mon.0) 531 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:01:33.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:33 vm11 bash[23232]: audit 2026-03-08T23:01:32.553833+0000 mon.a (mon.0) 532 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:01:33.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:33 vm11 bash[23232]: cluster 2026-03-08T23:01:33.018701+0000 mon.a (mon.0) 533 : cluster [DBG] osdmap e40: 6 total, 6 up, 6 in
2026-03-08T23:01:34.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:34 vm06 bash[20625]: cluster 2026-03-08T23:01:33.481680+0000 mgr.y (mgr.14150) 182 : cluster [DBG] pgmap v161: 1 pgs: 1 unknown; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail
2026-03-08T23:01:34.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:34 vm06 bash[27746]: cluster 2026-03-08T23:01:33.481680+0000 mgr.y (mgr.14150) 182 : cluster [DBG] pgmap v161: 1 pgs: 1 unknown; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail
2026-03-08T23:01:34.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:34 vm11 bash[23232]: cluster 2026-03-08T23:01:33.481680+0000 mgr.y (mgr.14150) 182 : cluster [DBG] pgmap v161: 1 pgs: 1 unknown; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail
2026-03-08T23:01:36.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:36 vm11 bash[23232]: cluster 2026-03-08T23:01:35.481944+0000 mgr.y (mgr.14150) 183 : cluster [DBG] pgmap v162: 1 pgs: 1 unknown; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail
2026-03-08T23:01:37.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:36 vm06 bash[20625]: cluster 2026-03-08T23:01:35.481944+0000 mgr.y (mgr.14150) 183 : cluster [DBG] pgmap v162: 1 pgs: 1 unknown; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail
2026-03-08T23:01:37.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:36 vm06 bash[27746]: cluster 2026-03-08T23:01:35.481944+0000 mgr.y (mgr.14150) 183 : cluster [DBG] pgmap v162: 1 pgs: 1 unknown; 449 KiB data, 160 MiB used, 120 GiB / 120 GiB avail
2026-03-08T23:01:37.363 INFO:teuthology.orchestra.run.vm11.stderr:Inferring config /var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/mon.b/config
2026-03-08T23:01:38.350 INFO:teuthology.orchestra.run.vm11.stdout:
2026-03-08T23:01:38.361 DEBUG:teuthology.orchestra.run.vm11:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e2eb96e6-1b41-11f1-83e5-75f1b5373d30 -- ceph orch daemon add osd vm11:/dev/vdc
2026-03-08T23:01:38.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:38 vm11 bash[23232]: cluster 2026-03-08T23:01:37.482232+0000 mgr.y (mgr.14150) 184 : cluster [DBG] pgmap v163: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 69 KiB/s, 0 objects/s recovering
2026-03-08T23:01:39.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:38 vm06 bash[20625]: cluster 2026-03-08T23:01:37.482232+0000 mgr.y (mgr.14150) 184 : cluster [DBG] pgmap v163: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 69 KiB/s, 0 objects/s recovering
2026-03-08T23:01:39.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:38 vm06 bash[27746]: cluster 2026-03-08T23:01:37.482232+0000 mgr.y (mgr.14150) 184 : cluster [DBG] pgmap v163: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 69 KiB/s, 0 objects/s recovering
2026-03-08T23:01:40.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:40 vm06 bash[20625]: cephadm 2026-03-08T23:01:39.158086+0000 mgr.y (mgr.14150) 185 : cephadm [INF] Detected new or changed devices on vm11
2026-03-08T23:01:40.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:40 vm06 bash[20625]: audit 2026-03-08T23:01:39.164660+0000 mon.a (mon.0) 534 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:01:40.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:40 vm06 bash[20625]: audit 2026-03-08T23:01:39.169623+0000 mon.a (mon.0) 535 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:01:40.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:40 vm06 bash[20625]: audit 2026-03-08T23:01:39.170871+0000 mon.a (mon.0) 536 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch
2026-03-08T23:01:40.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:40 vm06 bash[20625]: audit 2026-03-08T23:01:39.171311+0000 mon.a (mon.0) 537 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch
2026-03-08T23:01:40.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:40 vm06 bash[20625]: cephadm 2026-03-08T23:01:39.171651+0000 mgr.y (mgr.14150) 186 : cephadm [INF] Adjusting osd_memory_target on vm11 to 227.8M
2026-03-08T23:01:40.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:40 vm06 bash[20625]: cephadm 2026-03-08T23:01:39.172379+0000 mgr.y (mgr.14150) 187 : cephadm [WRN] Unable to set osd_memory_target on vm11 to 238960844: error parsing value: Value '238960844' is below minimum 939524096
2026-03-08T23:01:40.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:40 vm06 bash[20625]: audit 2026-03-08T23:01:39.172806+0000 mon.a (mon.0) 538 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:01:40.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:40 vm06 bash[20625]: audit 2026-03-08T23:01:39.173352+0000 mon.a (mon.0) 539 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T23:01:40.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:40 vm06 bash[20625]: audit 2026-03-08T23:01:39.178350+0000 mon.a (mon.0) 540 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:01:40.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:40 vm06 bash[27746]: cephadm 2026-03-08T23:01:39.158086+0000 mgr.y (mgr.14150) 185 : cephadm [INF] Detected new or changed devices on vm11
2026-03-08T23:01:40.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:40 vm06 bash[27746]: audit 2026-03-08T23:01:39.164660+0000 mon.a (mon.0) 534 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:01:40.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:40 vm06 bash[27746]: audit 2026-03-08T23:01:39.169623+0000 mon.a (mon.0) 535 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:01:40.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:40 vm06 bash[27746]: audit 2026-03-08T23:01:39.170871+0000 mon.a (mon.0) 536 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch
2026-03-08T23:01:40.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:40 vm06 bash[27746]: audit 2026-03-08T23:01:39.171311+0000 mon.a (mon.0) 537 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch
2026-03-08T23:01:40.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:40 vm06 bash[27746]: cephadm 2026-03-08T23:01:39.171651+0000 mgr.y (mgr.14150) 186 : cephadm [INF] Adjusting osd_memory_target on vm11 to 227.8M
2026-03-08T23:01:40.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:40 vm06 bash[27746]: cephadm 2026-03-08T23:01:39.172379+0000 mgr.y (mgr.14150) 187 : cephadm [WRN] Unable to set osd_memory_target on vm11 to 238960844: error parsing value: Value '238960844' is below minimum 939524096
2026-03-08T23:01:40.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:40 vm06 bash[27746]: audit 2026-03-08T23:01:39.172806+0000 mon.a (mon.0) 538 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:01:40.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:40 vm06 bash[27746]: audit 2026-03-08T23:01:39.173352+0000 mon.a (mon.0) 539 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T23:01:40.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:40 vm06 bash[27746]: audit 2026-03-08T23:01:39.178350+0000 mon.a (mon.0) 540 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:01:40.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:40 vm11 bash[23232]: cephadm 2026-03-08T23:01:39.158086+0000 mgr.y (mgr.14150) 185 : cephadm [INF] Detected new or changed devices on vm11
2026-03-08T23:01:40.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:40 vm11 bash[23232]: audit 2026-03-08T23:01:39.164660+0000 mon.a (mon.0) 534 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:01:40.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:40 vm11 bash[23232]: audit 2026-03-08T23:01:39.169623+0000 mon.a (mon.0) 535 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:01:40.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:40 vm11 bash[23232]: audit 2026-03-08T23:01:39.170871+0000 mon.a (mon.0) 536 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch
2026-03-08T23:01:40.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:40 vm11 bash[23232]: audit 2026-03-08T23:01:39.171311+0000 mon.a (mon.0) 537 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch
2026-03-08T23:01:40.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:40 vm11 bash[23232]: cephadm 2026-03-08T23:01:39.171651+0000 mgr.y (mgr.14150) 186 : cephadm [INF] Adjusting osd_memory_target on vm11 to 227.8M
2026-03-08T23:01:40.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:40 vm11 bash[23232]: cephadm 2026-03-08T23:01:39.172379+0000 mgr.y (mgr.14150) 187 : cephadm [WRN] Unable to set osd_memory_target on vm11 to 238960844: error parsing value: Value '238960844' is below minimum 939524096
2026-03-08T23:01:40.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:40 vm11 bash[23232]: audit 2026-03-08T23:01:39.172806+0000 mon.a (mon.0) 538 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:01:40.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:40 vm11 bash[23232]: audit 2026-03-08T23:01:39.173352+0000 mon.a (mon.0) 539 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T23:01:40.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:40 vm11 bash[23232]: audit 2026-03-08T23:01:39.178350+0000 mon.a (mon.0) 540 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:01:41.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:41 vm06 bash[20625]: cluster 2026-03-08T23:01:39.482500+0000 mgr.y (mgr.14150) 188 : cluster [DBG] pgmap v164: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 56 KiB/s, 0 objects/s recovering
2026-03-08T23:01:41.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:41 vm06 bash[27746]: cluster 2026-03-08T23:01:39.482500+0000 mgr.y (mgr.14150) 188 : cluster [DBG] pgmap v164: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 56 KiB/s, 0 objects/s recovering
2026-03-08T23:01:41.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:41 vm11 bash[23232]: cluster 2026-03-08T23:01:39.482500+0000 mgr.y (mgr.14150) 188 : cluster [DBG] pgmap v164: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 56 KiB/s, 0 objects/s recovering
2026-03-08T23:01:41.560 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:41 vm11 bash[23232]: cluster 2026-03-08T23:01:39.482500+0000 mgr.y (mgr.14150) 188 : cluster [DBG] pgmap v164: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 56 KiB/s, 0 objects/s
recovering 2026-03-08T23:01:43.002 INFO:teuthology.orchestra.run.vm11.stderr:Inferring config /var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/mon.b/config 2026-03-08T23:01:43.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:43 vm11 bash[23232]: cluster 2026-03-08T23:01:41.482777+0000 mgr.y (mgr.14150) 189 : cluster [DBG] pgmap v165: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 48 KiB/s, 0 objects/s recovering 2026-03-08T23:01:43.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:43 vm11 bash[23232]: cluster 2026-03-08T23:01:41.482777+0000 mgr.y (mgr.14150) 189 : cluster [DBG] pgmap v165: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 48 KiB/s, 0 objects/s recovering 2026-03-08T23:01:43.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:43 vm06 bash[20625]: cluster 2026-03-08T23:01:41.482777+0000 mgr.y (mgr.14150) 189 : cluster [DBG] pgmap v165: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 48 KiB/s, 0 objects/s recovering 2026-03-08T23:01:43.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:43 vm06 bash[20625]: cluster 2026-03-08T23:01:41.482777+0000 mgr.y (mgr.14150) 189 : cluster [DBG] pgmap v165: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 48 KiB/s, 0 objects/s recovering 2026-03-08T23:01:43.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:43 vm06 bash[27746]: cluster 2026-03-08T23:01:41.482777+0000 mgr.y (mgr.14150) 189 : cluster [DBG] pgmap v165: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 48 KiB/s, 0 objects/s recovering 2026-03-08T23:01:43.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:43 vm06 bash[27746]: cluster 2026-03-08T23:01:41.482777+0000 mgr.y (mgr.14150) 189 : cluster [DBG] pgmap v165: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 48 KiB/s, 0 objects/s recovering 2026-03-08T23:01:45.030 
INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:44 vm06 bash[20625]: audit 2026-03-08T23:01:43.341592+0000 mon.a (mon.0) 541 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-08T23:01:45.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:44 vm06 bash[20625]: audit 2026-03-08T23:01:43.341592+0000 mon.a (mon.0) 541 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-08T23:01:45.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:44 vm06 bash[20625]: audit 2026-03-08T23:01:43.342996+0000 mon.a (mon.0) 542 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-08T23:01:45.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:44 vm06 bash[20625]: audit 2026-03-08T23:01:43.342996+0000 mon.a (mon.0) 542 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-08T23:01:45.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:44 vm06 bash[20625]: audit 2026-03-08T23:01:43.343481+0000 mon.a (mon.0) 543 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:01:45.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:44 vm06 bash[20625]: audit 2026-03-08T23:01:43.343481+0000 mon.a (mon.0) 543 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:01:45.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:44 vm06 bash[27746]: audit 2026-03-08T23:01:43.341592+0000 mon.a (mon.0) 541 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 
cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-08T23:01:45.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:44 vm06 bash[27746]: audit 2026-03-08T23:01:43.341592+0000 mon.a (mon.0) 541 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-08T23:01:45.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:44 vm06 bash[27746]: audit 2026-03-08T23:01:43.342996+0000 mon.a (mon.0) 542 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-08T23:01:45.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:44 vm06 bash[27746]: audit 2026-03-08T23:01:43.342996+0000 mon.a (mon.0) 542 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-08T23:01:45.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:44 vm06 bash[27746]: audit 2026-03-08T23:01:43.343481+0000 mon.a (mon.0) 543 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:01:45.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:44 vm06 bash[27746]: audit 2026-03-08T23:01:43.343481+0000 mon.a (mon.0) 543 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:01:45.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:44 vm11 bash[23232]: audit 2026-03-08T23:01:43.341592+0000 mon.a (mon.0) 541 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-08T23:01:45.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:44 vm11 bash[23232]: audit 
2026-03-08T23:01:43.341592+0000 mon.a (mon.0) 541 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-08T23:01:45.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:44 vm11 bash[23232]: audit 2026-03-08T23:01:43.342996+0000 mon.a (mon.0) 542 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-08T23:01:45.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:44 vm11 bash[23232]: audit 2026-03-08T23:01:43.342996+0000 mon.a (mon.0) 542 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-08T23:01:45.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:44 vm11 bash[23232]: audit 2026-03-08T23:01:43.343481+0000 mon.a (mon.0) 543 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:01:45.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:44 vm11 bash[23232]: audit 2026-03-08T23:01:43.343481+0000 mon.a (mon.0) 543 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:01:46.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:45 vm06 bash[20625]: audit 2026-03-08T23:01:43.340097+0000 mgr.y (mgr.14150) 190 : audit [DBG] from='client.24254 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm11:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:01:46.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:45 vm06 bash[20625]: audit 2026-03-08T23:01:43.340097+0000 mgr.y (mgr.14150) 190 : audit [DBG] from='client.24254 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm11:/dev/vdc", "target": 
["mon-mgr", ""]}]: dispatch 2026-03-08T23:01:46.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:45 vm06 bash[20625]: cluster 2026-03-08T23:01:43.483064+0000 mgr.y (mgr.14150) 191 : cluster [DBG] pgmap v166: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 43 KiB/s, 0 objects/s recovering 2026-03-08T23:01:46.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:45 vm06 bash[20625]: cluster 2026-03-08T23:01:43.483064+0000 mgr.y (mgr.14150) 191 : cluster [DBG] pgmap v166: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 43 KiB/s, 0 objects/s recovering 2026-03-08T23:01:46.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:45 vm06 bash[27746]: audit 2026-03-08T23:01:43.340097+0000 mgr.y (mgr.14150) 190 : audit [DBG] from='client.24254 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm11:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:01:46.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:45 vm06 bash[27746]: audit 2026-03-08T23:01:43.340097+0000 mgr.y (mgr.14150) 190 : audit [DBG] from='client.24254 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm11:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:01:46.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:45 vm06 bash[27746]: cluster 2026-03-08T23:01:43.483064+0000 mgr.y (mgr.14150) 191 : cluster [DBG] pgmap v166: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 43 KiB/s, 0 objects/s recovering 2026-03-08T23:01:46.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:45 vm06 bash[27746]: cluster 2026-03-08T23:01:43.483064+0000 mgr.y (mgr.14150) 191 : cluster [DBG] pgmap v166: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 43 KiB/s, 0 objects/s recovering 2026-03-08T23:01:46.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:45 vm11 bash[23232]: audit 2026-03-08T23:01:43.340097+0000 mgr.y 
(mgr.14150) 190 : audit [DBG] from='client.24254 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm11:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:01:46.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:45 vm11 bash[23232]: audit 2026-03-08T23:01:43.340097+0000 mgr.y (mgr.14150) 190 : audit [DBG] from='client.24254 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm11:/dev/vdc", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:01:46.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:45 vm11 bash[23232]: cluster 2026-03-08T23:01:43.483064+0000 mgr.y (mgr.14150) 191 : cluster [DBG] pgmap v166: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 43 KiB/s, 0 objects/s recovering 2026-03-08T23:01:46.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:45 vm11 bash[23232]: cluster 2026-03-08T23:01:43.483064+0000 mgr.y (mgr.14150) 191 : cluster [DBG] pgmap v166: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 43 KiB/s, 0 objects/s recovering 2026-03-08T23:01:47.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:46 vm06 bash[20625]: cluster 2026-03-08T23:01:45.483322+0000 mgr.y (mgr.14150) 192 : cluster [DBG] pgmap v167: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-08T23:01:47.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:46 vm06 bash[20625]: cluster 2026-03-08T23:01:45.483322+0000 mgr.y (mgr.14150) 192 : cluster [DBG] pgmap v167: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-08T23:01:47.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:46 vm06 bash[27746]: cluster 2026-03-08T23:01:45.483322+0000 mgr.y (mgr.14150) 192 : cluster [DBG] pgmap v167: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 37 KiB/s, 0 objects/s recovering 
2026-03-08T23:01:47.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:46 vm06 bash[27746]: cluster 2026-03-08T23:01:45.483322+0000 mgr.y (mgr.14150) 192 : cluster [DBG] pgmap v167: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-08T23:01:47.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:46 vm11 bash[23232]: cluster 2026-03-08T23:01:45.483322+0000 mgr.y (mgr.14150) 192 : cluster [DBG] pgmap v167: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-08T23:01:47.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:46 vm11 bash[23232]: cluster 2026-03-08T23:01:45.483322+0000 mgr.y (mgr.14150) 192 : cluster [DBG] pgmap v167: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-08T23:01:49.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:48 vm11 bash[23232]: cluster 2026-03-08T23:01:47.483604+0000 mgr.y (mgr.14150) 193 : cluster [DBG] pgmap v168: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-08T23:01:49.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:48 vm11 bash[23232]: cluster 2026-03-08T23:01:47.483604+0000 mgr.y (mgr.14150) 193 : cluster [DBG] pgmap v168: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-08T23:01:49.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:48 vm06 bash[20625]: cluster 2026-03-08T23:01:47.483604+0000 mgr.y (mgr.14150) 193 : cluster [DBG] pgmap v168: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-08T23:01:49.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:48 vm06 bash[20625]: cluster 2026-03-08T23:01:47.483604+0000 mgr.y (mgr.14150) 193 : cluster [DBG] pgmap v168: 1 pgs: 1 active+clean; 449 KiB data, 
161 MiB used, 120 GiB / 120 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-08T23:01:49.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:48 vm06 bash[27746]: cluster 2026-03-08T23:01:47.483604+0000 mgr.y (mgr.14150) 193 : cluster [DBG] pgmap v168: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-08T23:01:49.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:48 vm06 bash[27746]: cluster 2026-03-08T23:01:47.483604+0000 mgr.y (mgr.14150) 193 : cluster [DBG] pgmap v168: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-08T23:01:50.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:49 vm11 bash[23232]: audit 2026-03-08T23:01:48.797335+0000 mon.b (mon.1) 18 : audit [INF] from='client.? 192.168.123.111:0/3373651304' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "1359b0d9-00db-474d-93f0-8246b9a8fa82"}]: dispatch 2026-03-08T23:01:50.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:49 vm11 bash[23232]: audit 2026-03-08T23:01:48.797335+0000 mon.b (mon.1) 18 : audit [INF] from='client.? 192.168.123.111:0/3373651304' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "1359b0d9-00db-474d-93f0-8246b9a8fa82"}]: dispatch 2026-03-08T23:01:50.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:49 vm11 bash[23232]: audit 2026-03-08T23:01:48.798337+0000 mon.a (mon.0) 544 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "1359b0d9-00db-474d-93f0-8246b9a8fa82"}]: dispatch 2026-03-08T23:01:50.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:49 vm11 bash[23232]: audit 2026-03-08T23:01:48.798337+0000 mon.a (mon.0) 544 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "1359b0d9-00db-474d-93f0-8246b9a8fa82"}]: dispatch 2026-03-08T23:01:50.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:49 vm11 bash[23232]: audit 2026-03-08T23:01:48.802561+0000 mon.a (mon.0) 545 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "1359b0d9-00db-474d-93f0-8246b9a8fa82"}]': finished 2026-03-08T23:01:50.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:49 vm11 bash[23232]: audit 2026-03-08T23:01:48.802561+0000 mon.a (mon.0) 545 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "1359b0d9-00db-474d-93f0-8246b9a8fa82"}]': finished 2026-03-08T23:01:50.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:49 vm11 bash[23232]: cluster 2026-03-08T23:01:48.807830+0000 mon.a (mon.0) 546 : cluster [DBG] osdmap e41: 7 total, 6 up, 7 in 2026-03-08T23:01:50.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:49 vm11 bash[23232]: cluster 2026-03-08T23:01:48.807830+0000 mon.a (mon.0) 546 : cluster [DBG] osdmap e41: 7 total, 6 up, 7 in 2026-03-08T23:01:50.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:49 vm11 bash[23232]: audit 2026-03-08T23:01:48.808069+0000 mon.a (mon.0) 547 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-08T23:01:50.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:49 vm11 bash[23232]: audit 2026-03-08T23:01:48.808069+0000 mon.a (mon.0) 547 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-08T23:01:50.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:49 vm11 bash[23232]: audit 2026-03-08T23:01:49.436401+0000 mon.b (mon.1) 19 : audit [DBG] from='client.? 
192.168.123.111:0/2607878168' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-08T23:01:50.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:49 vm11 bash[23232]: audit 2026-03-08T23:01:49.436401+0000 mon.b (mon.1) 19 : audit [DBG] from='client.? 192.168.123.111:0/2607878168' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-08T23:01:50.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:49 vm06 bash[20625]: audit 2026-03-08T23:01:48.797335+0000 mon.b (mon.1) 18 : audit [INF] from='client.? 192.168.123.111:0/3373651304' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "1359b0d9-00db-474d-93f0-8246b9a8fa82"}]: dispatch 2026-03-08T23:01:50.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:49 vm06 bash[20625]: audit 2026-03-08T23:01:48.797335+0000 mon.b (mon.1) 18 : audit [INF] from='client.? 192.168.123.111:0/3373651304' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "1359b0d9-00db-474d-93f0-8246b9a8fa82"}]: dispatch 2026-03-08T23:01:50.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:49 vm06 bash[20625]: audit 2026-03-08T23:01:48.798337+0000 mon.a (mon.0) 544 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "1359b0d9-00db-474d-93f0-8246b9a8fa82"}]: dispatch 2026-03-08T23:01:50.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:49 vm06 bash[20625]: audit 2026-03-08T23:01:48.798337+0000 mon.a (mon.0) 544 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "1359b0d9-00db-474d-93f0-8246b9a8fa82"}]: dispatch 2026-03-08T23:01:50.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:49 vm06 bash[20625]: audit 2026-03-08T23:01:48.802561+0000 mon.a (mon.0) 545 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "1359b0d9-00db-474d-93f0-8246b9a8fa82"}]': finished 2026-03-08T23:01:50.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:49 vm06 bash[20625]: audit 2026-03-08T23:01:48.802561+0000 mon.a (mon.0) 545 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "1359b0d9-00db-474d-93f0-8246b9a8fa82"}]': finished 2026-03-08T23:01:50.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:49 vm06 bash[20625]: cluster 2026-03-08T23:01:48.807830+0000 mon.a (mon.0) 546 : cluster [DBG] osdmap e41: 7 total, 6 up, 7 in 2026-03-08T23:01:50.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:49 vm06 bash[20625]: cluster 2026-03-08T23:01:48.807830+0000 mon.a (mon.0) 546 : cluster [DBG] osdmap e41: 7 total, 6 up, 7 in 2026-03-08T23:01:50.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:49 vm06 bash[20625]: audit 2026-03-08T23:01:48.808069+0000 mon.a (mon.0) 547 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-08T23:01:50.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:49 vm06 bash[20625]: audit 2026-03-08T23:01:48.808069+0000 mon.a (mon.0) 547 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-08T23:01:50.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:49 vm06 bash[20625]: audit 2026-03-08T23:01:49.436401+0000 mon.b (mon.1) 19 : audit [DBG] from='client.? 192.168.123.111:0/2607878168' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-08T23:01:50.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:49 vm06 bash[20625]: audit 2026-03-08T23:01:49.436401+0000 mon.b (mon.1) 19 : audit [DBG] from='client.? 
192.168.123.111:0/2607878168' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-08T23:01:50.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:49 vm06 bash[27746]: audit 2026-03-08T23:01:48.797335+0000 mon.b (mon.1) 18 : audit [INF] from='client.? 192.168.123.111:0/3373651304' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "1359b0d9-00db-474d-93f0-8246b9a8fa82"}]: dispatch 2026-03-08T23:01:50.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:49 vm06 bash[27746]: audit 2026-03-08T23:01:48.797335+0000 mon.b (mon.1) 18 : audit [INF] from='client.? 192.168.123.111:0/3373651304' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "1359b0d9-00db-474d-93f0-8246b9a8fa82"}]: dispatch 2026-03-08T23:01:50.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:49 vm06 bash[27746]: audit 2026-03-08T23:01:48.798337+0000 mon.a (mon.0) 544 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "1359b0d9-00db-474d-93f0-8246b9a8fa82"}]: dispatch 2026-03-08T23:01:50.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:49 vm06 bash[27746]: audit 2026-03-08T23:01:48.798337+0000 mon.a (mon.0) 544 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "1359b0d9-00db-474d-93f0-8246b9a8fa82"}]: dispatch 2026-03-08T23:01:50.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:49 vm06 bash[27746]: audit 2026-03-08T23:01:48.802561+0000 mon.a (mon.0) 545 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "1359b0d9-00db-474d-93f0-8246b9a8fa82"}]': finished 2026-03-08T23:01:50.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:49 vm06 bash[27746]: audit 2026-03-08T23:01:48.802561+0000 mon.a (mon.0) 545 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "1359b0d9-00db-474d-93f0-8246b9a8fa82"}]': finished 2026-03-08T23:01:50.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:49 vm06 bash[27746]: cluster 2026-03-08T23:01:48.807830+0000 mon.a (mon.0) 546 : cluster [DBG] osdmap e41: 7 total, 6 up, 7 in 2026-03-08T23:01:50.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:49 vm06 bash[27746]: cluster 2026-03-08T23:01:48.807830+0000 mon.a (mon.0) 546 : cluster [DBG] osdmap e41: 7 total, 6 up, 7 in 2026-03-08T23:01:50.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:49 vm06 bash[27746]: audit 2026-03-08T23:01:48.808069+0000 mon.a (mon.0) 547 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-08T23:01:50.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:49 vm06 bash[27746]: audit 2026-03-08T23:01:48.808069+0000 mon.a (mon.0) 547 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-08T23:01:50.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:49 vm06 bash[27746]: audit 2026-03-08T23:01:49.436401+0000 mon.b (mon.1) 19 : audit [DBG] from='client.? 192.168.123.111:0/2607878168' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-08T23:01:50.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:49 vm06 bash[27746]: audit 2026-03-08T23:01:49.436401+0000 mon.b (mon.1) 19 : audit [DBG] from='client.? 
192.168.123.111:0/2607878168' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-08T23:01:51.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:50 vm11 bash[23232]: cluster 2026-03-08T23:01:49.483903+0000 mgr.y (mgr.14150) 194 : cluster [DBG] pgmap v170: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-08T23:01:51.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:50 vm11 bash[23232]: cluster 2026-03-08T23:01:49.483903+0000 mgr.y (mgr.14150) 194 : cluster [DBG] pgmap v170: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-08T23:01:51.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:50 vm06 bash[20625]: cluster 2026-03-08T23:01:49.483903+0000 mgr.y (mgr.14150) 194 : cluster [DBG] pgmap v170: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-08T23:01:51.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:50 vm06 bash[20625]: cluster 2026-03-08T23:01:49.483903+0000 mgr.y (mgr.14150) 194 : cluster [DBG] pgmap v170: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-08T23:01:51.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:50 vm06 bash[27746]: cluster 2026-03-08T23:01:49.483903+0000 mgr.y (mgr.14150) 194 : cluster [DBG] pgmap v170: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-08T23:01:51.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:50 vm06 bash[27746]: cluster 2026-03-08T23:01:49.483903+0000 mgr.y (mgr.14150) 194 : cluster [DBG] pgmap v170: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-08T23:01:53.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:52 vm06 bash[20625]: cluster 2026-03-08T23:01:51.484191+0000 mgr.y (mgr.14150) 195 : cluster [DBG] pgmap v171: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-08T23:01:53.279 
INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:52 vm06 bash[20625]: cluster 2026-03-08T23:01:51.484191+0000 mgr.y (mgr.14150) 195 : cluster [DBG] pgmap v171: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail
2026-03-08T23:01:53.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:52 vm06 bash[27746]: cluster 2026-03-08T23:01:51.484191+0000 mgr.y (mgr.14150) 195 : cluster [DBG] pgmap v171: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail
2026-03-08T23:01:53.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:52 vm11 bash[23232]: cluster 2026-03-08T23:01:51.484191+0000 mgr.y (mgr.14150) 195 : cluster [DBG] pgmap v171: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail
2026-03-08T23:01:55.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:54 vm06 bash[20625]: cluster 2026-03-08T23:01:53.484523+0000 mgr.y (mgr.14150) 196 : cluster [DBG] pgmap v172: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail
2026-03-08T23:01:55.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:54 vm06 bash[27746]: cluster 2026-03-08T23:01:53.484523+0000 mgr.y (mgr.14150) 196 : cluster [DBG] pgmap v172: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail
2026-03-08T23:01:55.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:54 vm11 bash[23232]: cluster 2026-03-08T23:01:53.484523+0000 mgr.y (mgr.14150) 196 : cluster [DBG] pgmap v172: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail
2026-03-08T23:01:57.270 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:56 vm11 bash[23232]: cluster 2026-03-08T23:01:55.484812+0000 mgr.y (mgr.14150) 197 : cluster [DBG] pgmap v173: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail
2026-03-08T23:01:57.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:56 vm06 bash[20625]: cluster 2026-03-08T23:01:55.484812+0000 mgr.y (mgr.14150) 197 : cluster [DBG] pgmap v173: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail
2026-03-08T23:01:57.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:56 vm06 bash[27746]: cluster 2026-03-08T23:01:55.484812+0000 mgr.y (mgr.14150) 197 : cluster [DBG] pgmap v173: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail
2026-03-08T23:01:58.269 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:57 vm11 bash[23232]: audit 2026-03-08T23:01:57.691450+0000 mon.a (mon.0) 548 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch
2026-03-08T23:01:58.269 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:57 vm11 bash[23232]: audit 2026-03-08T23:01:57.692056+0000 mon.a (mon.0) 549 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:01:58.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:57 vm06 bash[20625]: audit 2026-03-08T23:01:57.691450+0000 mon.a (mon.0) 548 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch
2026-03-08T23:01:58.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:57 vm06 bash[20625]: audit 2026-03-08T23:01:57.692056+0000 mon.a (mon.0) 549 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:01:58.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:57 vm06 bash[27746]: audit 2026-03-08T23:01:57.691450+0000 mon.a (mon.0) 548 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.6"}]: dispatch
2026-03-08T23:01:58.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:57 vm06 bash[27746]: audit 2026-03-08T23:01:57.692056+0000 mon.a (mon.0) 549 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:01:58.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:58 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:01:58.557 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:01:58 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:01:58.557 INFO:journalctl@ceph.osd.4.vm11.stdout:Mar 08 23:01:58 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:01:58.558 INFO:journalctl@ceph.osd.5.vm11.stdout:Mar 08 23:01:58 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
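The systemd warnings above are emitted for the cephadm-generated unit template, which sets `KillMode=none` at line 23. The message itself names the fix: switch to a safer mode such as `mixed`. A minimal sketch of how that could be applied via a standard systemd drop-in follows; the unit name is taken from this log, the drop-in path and file name are conventional choices, and this is not something the test run itself does:

```ini
# /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service.d/override.conf
# Hypothetical drop-in replacing the deprecated KillMode=none with
# 'mixed': SIGTERM goes to the main process on stop, SIGKILL to any
# processes remaining in the unit's control group.
[Service]
KillMode=mixed
```

A `systemctl daemon-reload` would be needed for the override to take effect. Note that cephadm owns these unit files, so a manual override may be rewritten the next time the daemon is redeployed.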
2026-03-08T23:01:59.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:58 vm06 bash[20625]: cluster 2026-03-08T23:01:57.485064+0000 mgr.y (mgr.14150) 198 : cluster [DBG] pgmap v174: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail
2026-03-08T23:01:59.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:58 vm06 bash[20625]: cephadm 2026-03-08T23:01:57.692474+0000 mgr.y (mgr.14150) 199 : cephadm [INF] Deploying daemon osd.6 on vm11
2026-03-08T23:01:59.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:58 vm06 bash[20625]: audit 2026-03-08T23:01:58.816095+0000 mon.a (mon.0) 550 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T23:01:59.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:58 vm06 bash[20625]: audit 2026-03-08T23:01:58.822410+0000 mon.a (mon.0) 551 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:01:59.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:01:58 vm06 bash[20625]: audit 2026-03-08T23:01:58.834080+0000 mon.a (mon.0) 552 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:01:59.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:58 vm06 bash[27746]: cluster 2026-03-08T23:01:57.485064+0000 mgr.y (mgr.14150) 198 : cluster [DBG] pgmap v174: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail
2026-03-08T23:01:59.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:58 vm06 bash[27746]: cephadm 2026-03-08T23:01:57.692474+0000 mgr.y (mgr.14150) 199 : cephadm [INF] Deploying daemon osd.6 on vm11
2026-03-08T23:01:59.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:58 vm06 bash[27746]: audit 2026-03-08T23:01:58.816095+0000 mon.a (mon.0) 550 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T23:01:59.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:58 vm06 bash[27746]: audit 2026-03-08T23:01:58.822410+0000 mon.a (mon.0) 551 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:01:59.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:01:58 vm06 bash[27746]: audit 2026-03-08T23:01:58.834080+0000 mon.a (mon.0) 552 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:01:59.309 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:58 vm11 bash[23232]: cluster 2026-03-08T23:01:57.485064+0000 mgr.y (mgr.14150) 198 : cluster [DBG] pgmap v174: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail
2026-03-08T23:01:59.309 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:58 vm11 bash[23232]: cephadm 2026-03-08T23:01:57.692474+0000 mgr.y (mgr.14150) 199 : cephadm [INF] Deploying daemon osd.6 on vm11
2026-03-08T23:01:59.309 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:58 vm11 bash[23232]: audit 2026-03-08T23:01:58.816095+0000 mon.a (mon.0) 550 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T23:01:59.309 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:58 vm11 bash[23232]: audit 2026-03-08T23:01:58.822410+0000 mon.a (mon.0) 551 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:01:59.309 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:01:58 vm11 bash[23232]: audit 2026-03-08T23:01:58.834080+0000 mon.a (mon.0) 552 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:02:01.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:00 vm06 bash[20625]: cluster 2026-03-08T23:01:59.485441+0000 mgr.y (mgr.14150) 200 : cluster [DBG] pgmap v175: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail
2026-03-08T23:02:01.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:00 vm06 bash[27746]: cluster 2026-03-08T23:01:59.485441+0000 mgr.y (mgr.14150) 200 : cluster [DBG] pgmap v175: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail
2026-03-08T23:02:01.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:00 vm11 bash[23232]: cluster 2026-03-08T23:01:59.485441+0000 mgr.y (mgr.14150) 200 : cluster [DBG] pgmap v175: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail
2026-03-08T23:02:03.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:03 vm06 bash[20625]: cluster 2026-03-08T23:02:01.485827+0000 mgr.y (mgr.14150) 201 : cluster [DBG] pgmap v176: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail
2026-03-08T23:02:03.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:03 vm06 bash[20625]: audit 2026-03-08T23:02:02.267474+0000 mon.b (mon.1) 20 : audit [INF] from='osd.6 v2:192.168.123.111:6808/3646507391' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch
2026-03-08T23:02:03.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:03 vm06 bash[20625]: audit 2026-03-08T23:02:02.268560+0000 mon.a (mon.0) 553 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch
2026-03-08T23:02:03.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:03 vm06 bash[27746]: cluster 2026-03-08T23:02:01.485827+0000 mgr.y (mgr.14150) 201 : cluster [DBG] pgmap v176: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail
2026-03-08T23:02:03.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:03 vm06 bash[27746]: audit 2026-03-08T23:02:02.267474+0000 mon.b (mon.1) 20 : audit [INF] from='osd.6 v2:192.168.123.111:6808/3646507391' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch
2026-03-08T23:02:03.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:03 vm06 bash[27746]: audit 2026-03-08T23:02:02.268560+0000 mon.a (mon.0) 553 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch
2026-03-08T23:02:03.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:03 vm11 bash[23232]: cluster 2026-03-08T23:02:01.485827+0000 mgr.y (mgr.14150) 201 : cluster [DBG] pgmap v176: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail
2026-03-08T23:02:03.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:03 vm11 bash[23232]: audit 2026-03-08T23:02:02.267474+0000 mon.b (mon.1) 20 : audit [INF] from='osd.6 v2:192.168.123.111:6808/3646507391' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch
2026-03-08T23:02:03.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:03 vm11 bash[23232]: audit 2026-03-08T23:02:02.268560+0000 mon.a (mon.0) 553 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]: dispatch
2026-03-08T23:02:04.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:04 vm06 bash[20625]: audit 2026-03-08T23:02:03.219091+0000 mon.a (mon.0) 554 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]': finished
2026-03-08T23:02:04.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:04 vm06 bash[20625]: audit 2026-03-08T23:02:03.224586+0000 mon.b (mon.1) 21 : audit [INF] from='osd.6 v2:192.168.123.111:6808/3646507391' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm11", "root=default"]}]: dispatch
2026-03-08T23:02:04.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:04 vm06 bash[20625]: cluster 2026-03-08T23:02:03.226219+0000 mon.a (mon.0) 555 : cluster [DBG] osdmap e42: 7 total, 6 up, 7 in
2026-03-08T23:02:04.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:04 vm06 bash[20625]: audit 2026-03-08T23:02:03.226585+0000 mon.a (mon.0) 556 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-08T23:02:04.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:04 vm06 bash[20625]: audit 2026-03-08T23:02:03.226669+0000 mon.a (mon.0) 557 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm11", "root=default"]}]: dispatch
2026-03-08T23:02:04.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:04 vm06 bash[27746]: audit 2026-03-08T23:02:03.219091+0000 mon.a (mon.0) 554 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]': finished
2026-03-08T23:02:04.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:04 vm06 bash[27746]: audit 2026-03-08T23:02:03.224586+0000 mon.b (mon.1) 21 : audit [INF] from='osd.6 v2:192.168.123.111:6808/3646507391' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm11", "root=default"]}]: dispatch
2026-03-08T23:02:04.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:04 vm06 bash[27746]: cluster 2026-03-08T23:02:03.226219+0000 mon.a (mon.0) 555 : cluster [DBG] osdmap e42: 7 total, 6 up, 7 in
2026-03-08T23:02:04.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:04 vm06 bash[27746]: audit 2026-03-08T23:02:03.226585+0000 mon.a (mon.0) 556 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-08T23:02:04.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:04 vm06 bash[27746]: audit 2026-03-08T23:02:03.226669+0000 mon.a (mon.0) 557 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm11", "root=default"]}]: dispatch
2026-03-08T23:02:04.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:04 vm11 bash[23232]: audit 2026-03-08T23:02:03.219091+0000 mon.a (mon.0) 554 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["6"]}]': finished
2026-03-08T23:02:04.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:04 vm11 bash[23232]: audit 2026-03-08T23:02:03.224586+0000 mon.b (mon.1) 21 : audit [INF] from='osd.6 v2:192.168.123.111:6808/3646507391' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm11", "root=default"]}]: dispatch
2026-03-08T23:02:04.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:04 vm11 bash[23232]: cluster 2026-03-08T23:02:03.226219+0000 mon.a (mon.0) 555 : cluster [DBG] osdmap e42: 7 total, 6 up, 7 in
2026-03-08T23:02:04.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:04 vm11 bash[23232]: audit 2026-03-08T23:02:03.226585+0000 mon.a (mon.0) 556 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-08T23:02:04.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:04 vm11 bash[23232]: audit 2026-03-08T23:02:03.226669+0000 mon.a (mon.0) 557 : audit [INF] from='osd.6 ' entity='osd.6' cmd=[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm11", "root=default"]}]: dispatch
2026-03-08T23:02:05.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:05 vm06 bash[20625]: cluster 2026-03-08T23:02:03.486092+0000 mgr.y (mgr.14150) 202 : cluster [DBG] pgmap v178: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail
2026-03-08T23:02:05.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:05 vm06 bash[20625]: audit 2026-03-08T23:02:04.232453+0000 mon.a (mon.0) 558 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm11", "root=default"]}]': finished
2026-03-08T23:02:05.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:05 vm06 bash[20625]: cluster 2026-03-08T23:02:04.238177+0000 mon.a (mon.0) 559 : cluster [DBG] osdmap e43: 7 total, 6 up, 7 in
2026-03-08T23:02:05.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:05 vm06 bash[20625]: audit 2026-03-08T23:02:04.241840+0000 mon.a (mon.0) 560 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-08T23:02:05.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:05 vm06 bash[20625]: audit 2026-03-08T23:02:05.240982+0000 mon.a (mon.0) 561 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-08T23:02:05.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:05 vm06 bash[20625]: cluster 2026-03-08T23:02:05.251026+0000 mon.a (mon.0) 562 : cluster [INF] osd.6 v2:192.168.123.111:6808/3646507391 boot
2026-03-08T23:02:05.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:05 vm06 bash[20625]: cluster 2026-03-08T23:02:05.251154+0000 mon.a (mon.0) 563 : cluster [DBG] osdmap e44: 7 total, 7 up, 7 in
2026-03-08T23:02:05.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:05 vm06 bash[20625]: audit 2026-03-08T23:02:05.253924+0000 mon.a (mon.0) 564 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-08T23:02:05.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:05 vm06 bash[27746]: cluster 2026-03-08T23:02:03.486092+0000 mgr.y (mgr.14150) 202 : cluster [DBG] pgmap v178: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail
2026-03-08T23:02:05.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:05 vm06 bash[27746]: audit 2026-03-08T23:02:04.232453+0000 mon.a (mon.0) 558 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm11", "root=default"]}]': finished
2026-03-08T23:02:05.530
INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:05 vm06 bash[27746]: cluster 2026-03-08T23:02:04.238177+0000 mon.a (mon.0) 559 : cluster [DBG] osdmap e43: 7 total, 6 up, 7 in 2026-03-08T23:02:05.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:05 vm06 bash[27746]: cluster 2026-03-08T23:02:04.238177+0000 mon.a (mon.0) 559 : cluster [DBG] osdmap e43: 7 total, 6 up, 7 in 2026-03-08T23:02:05.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:05 vm06 bash[27746]: audit 2026-03-08T23:02:04.241840+0000 mon.a (mon.0) 560 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-08T23:02:05.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:05 vm06 bash[27746]: audit 2026-03-08T23:02:04.241840+0000 mon.a (mon.0) 560 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-08T23:02:05.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:05 vm06 bash[27746]: audit 2026-03-08T23:02:05.240982+0000 mon.a (mon.0) 561 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-08T23:02:05.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:05 vm06 bash[27746]: audit 2026-03-08T23:02:05.240982+0000 mon.a (mon.0) 561 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-08T23:02:05.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:05 vm06 bash[27746]: cluster 2026-03-08T23:02:05.251026+0000 mon.a (mon.0) 562 : cluster [INF] osd.6 v2:192.168.123.111:6808/3646507391 boot 2026-03-08T23:02:05.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:05 vm06 bash[27746]: cluster 2026-03-08T23:02:05.251026+0000 mon.a (mon.0) 562 : cluster [INF] osd.6 v2:192.168.123.111:6808/3646507391 boot 2026-03-08T23:02:05.530 
INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:05 vm06 bash[27746]: cluster 2026-03-08T23:02:05.251154+0000 mon.a (mon.0) 563 : cluster [DBG] osdmap e44: 7 total, 7 up, 7 in 2026-03-08T23:02:05.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:05 vm06 bash[27746]: cluster 2026-03-08T23:02:05.251154+0000 mon.a (mon.0) 563 : cluster [DBG] osdmap e44: 7 total, 7 up, 7 in 2026-03-08T23:02:05.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:05 vm06 bash[27746]: audit 2026-03-08T23:02:05.253924+0000 mon.a (mon.0) 564 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-08T23:02:05.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:05 vm06 bash[27746]: audit 2026-03-08T23:02:05.253924+0000 mon.a (mon.0) 564 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-08T23:02:05.565 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:05 vm11 bash[23232]: cluster 2026-03-08T23:02:03.486092+0000 mgr.y (mgr.14150) 202 : cluster [DBG] pgmap v178: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-08T23:02:05.565 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:05 vm11 bash[23232]: cluster 2026-03-08T23:02:03.486092+0000 mgr.y (mgr.14150) 202 : cluster [DBG] pgmap v178: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-08T23:02:05.565 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:05 vm11 bash[23232]: audit 2026-03-08T23:02:04.232453+0000 mon.a (mon.0) 558 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": "osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm11", "root=default"]}]': finished 2026-03-08T23:02:05.565 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:05 vm11 bash[23232]: audit 2026-03-08T23:02:04.232453+0000 mon.a (mon.0) 558 : audit [INF] from='osd.6 ' entity='osd.6' cmd='[{"prefix": 
"osd crush create-or-move", "id": 6, "weight":0.0195, "args": ["host=vm11", "root=default"]}]': finished 2026-03-08T23:02:05.565 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:05 vm11 bash[23232]: cluster 2026-03-08T23:02:04.238177+0000 mon.a (mon.0) 559 : cluster [DBG] osdmap e43: 7 total, 6 up, 7 in 2026-03-08T23:02:05.565 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:05 vm11 bash[23232]: cluster 2026-03-08T23:02:04.238177+0000 mon.a (mon.0) 559 : cluster [DBG] osdmap e43: 7 total, 6 up, 7 in 2026-03-08T23:02:05.565 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:05 vm11 bash[23232]: audit 2026-03-08T23:02:04.241840+0000 mon.a (mon.0) 560 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-08T23:02:05.565 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:05 vm11 bash[23232]: audit 2026-03-08T23:02:04.241840+0000 mon.a (mon.0) 560 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-08T23:02:05.565 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:05 vm11 bash[23232]: audit 2026-03-08T23:02:05.240982+0000 mon.a (mon.0) 561 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-08T23:02:05.565 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:05 vm11 bash[23232]: audit 2026-03-08T23:02:05.240982+0000 mon.a (mon.0) 561 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-08T23:02:05.565 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:05 vm11 bash[23232]: cluster 2026-03-08T23:02:05.251026+0000 mon.a (mon.0) 562 : cluster [INF] osd.6 v2:192.168.123.111:6808/3646507391 boot 2026-03-08T23:02:05.565 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:05 vm11 bash[23232]: cluster 2026-03-08T23:02:05.251026+0000 mon.a (mon.0) 
562 : cluster [INF] osd.6 v2:192.168.123.111:6808/3646507391 boot 2026-03-08T23:02:05.565 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:05 vm11 bash[23232]: cluster 2026-03-08T23:02:05.251154+0000 mon.a (mon.0) 563 : cluster [DBG] osdmap e44: 7 total, 7 up, 7 in 2026-03-08T23:02:05.565 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:05 vm11 bash[23232]: cluster 2026-03-08T23:02:05.251154+0000 mon.a (mon.0) 563 : cluster [DBG] osdmap e44: 7 total, 7 up, 7 in 2026-03-08T23:02:05.565 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:05 vm11 bash[23232]: audit 2026-03-08T23:02:05.253924+0000 mon.a (mon.0) 564 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-08T23:02:05.565 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:05 vm11 bash[23232]: audit 2026-03-08T23:02:05.253924+0000 mon.a (mon.0) 564 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-08T23:02:06.773 INFO:teuthology.orchestra.run.vm11.stdout:Created osd(s) 6 on host 'vm11' 2026-03-08T23:02:06.864 DEBUG:teuthology.orchestra.run.vm11:osd.6> sudo journalctl -f -n 0 -u ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@osd.6.service 2026-03-08T23:02:06.865 INFO:tasks.cephadm:Deploying osd.7 on vm11 with /dev/vdb... 
2026-03-08T23:02:06.865 DEBUG:teuthology.orchestra.run.vm11:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e2eb96e6-1b41-11f1-83e5-75f1b5373d30 -- lvm zap /dev/vdb 2026-03-08T23:02:06.869 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:06 vm11 bash[23232]: cluster 2026-03-08T23:02:03.219945+0000 osd.6 (osd.6) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-08T23:02:06.869 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:06 vm11 bash[23232]: cluster 2026-03-08T23:02:03.219945+0000 osd.6 (osd.6) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-08T23:02:06.869 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:06 vm11 bash[23232]: cluster 2026-03-08T23:02:03.219991+0000 osd.6 (osd.6) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-08T23:02:06.869 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:06 vm11 bash[23232]: cluster 2026-03-08T23:02:03.219991+0000 osd.6 (osd.6) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-08T23:02:06.870 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:06 vm11 bash[23232]: audit 2026-03-08T23:02:05.263414+0000 mon.a (mon.0) 565 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:02:06.870 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:06 vm11 bash[23232]: audit 2026-03-08T23:02:05.263414+0000 mon.a (mon.0) 565 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:02:06.870 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:06 vm11 bash[23232]: audit 2026-03-08T23:02:05.303387+0000 mon.a (mon.0) 566 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:02:06.870 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:06 vm11 bash[23232]: audit 2026-03-08T23:02:05.303387+0000 mon.a (mon.0) 566 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 
2026-03-08T23:02:06.870 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:06 vm11 bash[23232]: audit 2026-03-08T23:02:05.715493+0000 mon.a (mon.0) 567 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:02:06.870 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:06 vm11 bash[23232]: audit 2026-03-08T23:02:05.715493+0000 mon.a (mon.0) 567 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:02:06.870 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:06 vm11 bash[23232]: audit 2026-03-08T23:02:05.716122+0000 mon.a (mon.0) 568 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:02:06.870 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:06 vm11 bash[23232]: audit 2026-03-08T23:02:05.716122+0000 mon.a (mon.0) 568 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:02:06.870 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:06 vm11 bash[23232]: audit 2026-03-08T23:02:05.723263+0000 mon.a (mon.0) 569 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:02:06.870 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:06 vm11 bash[23232]: audit 2026-03-08T23:02:05.723263+0000 mon.a (mon.0) 569 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:02:06.870 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:06 vm11 bash[23232]: cluster 2026-03-08T23:02:05.963471+0000 mon.a (mon.0) 570 : cluster [DBG] osdmap e45: 7 total, 7 up, 7 in 2026-03-08T23:02:06.870 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:06 vm11 bash[23232]: cluster 2026-03-08T23:02:05.963471+0000 mon.a (mon.0) 570 : cluster [DBG] osdmap 
e45: 7 total, 7 up, 7 in 2026-03-08T23:02:07.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:06 vm06 bash[20625]: cluster 2026-03-08T23:02:03.219945+0000 osd.6 (osd.6) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-08T23:02:07.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:06 vm06 bash[20625]: cluster 2026-03-08T23:02:03.219945+0000 osd.6 (osd.6) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-08T23:02:07.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:06 vm06 bash[20625]: cluster 2026-03-08T23:02:03.219991+0000 osd.6 (osd.6) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-08T23:02:07.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:06 vm06 bash[20625]: cluster 2026-03-08T23:02:03.219991+0000 osd.6 (osd.6) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-08T23:02:07.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:06 vm06 bash[20625]: audit 2026-03-08T23:02:05.263414+0000 mon.a (mon.0) 565 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:02:07.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:06 vm06 bash[20625]: audit 2026-03-08T23:02:05.263414+0000 mon.a (mon.0) 565 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:02:07.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:06 vm06 bash[20625]: audit 2026-03-08T23:02:05.303387+0000 mon.a (mon.0) 566 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:02:07.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:06 vm06 bash[20625]: audit 2026-03-08T23:02:05.303387+0000 mon.a (mon.0) 566 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:02:07.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:06 vm06 bash[20625]: audit 2026-03-08T23:02:05.715493+0000 mon.a (mon.0) 567 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 
2026-03-08T23:02:07.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:06 vm06 bash[20625]: audit 2026-03-08T23:02:05.715493+0000 mon.a (mon.0) 567 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:02:07.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:06 vm06 bash[20625]: audit 2026-03-08T23:02:05.716122+0000 mon.a (mon.0) 568 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:02:07.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:06 vm06 bash[20625]: audit 2026-03-08T23:02:05.716122+0000 mon.a (mon.0) 568 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:02:07.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:06 vm06 bash[20625]: audit 2026-03-08T23:02:05.723263+0000 mon.a (mon.0) 569 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:02:07.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:06 vm06 bash[20625]: audit 2026-03-08T23:02:05.723263+0000 mon.a (mon.0) 569 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:02:07.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:06 vm06 bash[20625]: cluster 2026-03-08T23:02:05.963471+0000 mon.a (mon.0) 570 : cluster [DBG] osdmap e45: 7 total, 7 up, 7 in 2026-03-08T23:02:07.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:06 vm06 bash[20625]: cluster 2026-03-08T23:02:05.963471+0000 mon.a (mon.0) 570 : cluster [DBG] osdmap e45: 7 total, 7 up, 7 in 2026-03-08T23:02:07.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:06 vm06 bash[27746]: cluster 2026-03-08T23:02:03.219945+0000 osd.6 (osd.6) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-08T23:02:07.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 
23:02:06 vm06 bash[27746]: cluster 2026-03-08T23:02:03.219945+0000 osd.6 (osd.6) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-08T23:02:07.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:06 vm06 bash[27746]: cluster 2026-03-08T23:02:03.219991+0000 osd.6 (osd.6) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-08T23:02:07.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:06 vm06 bash[27746]: cluster 2026-03-08T23:02:03.219991+0000 osd.6 (osd.6) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-08T23:02:07.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:06 vm06 bash[27746]: audit 2026-03-08T23:02:05.263414+0000 mon.a (mon.0) 565 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:02:07.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:06 vm06 bash[27746]: audit 2026-03-08T23:02:05.263414+0000 mon.a (mon.0) 565 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:02:07.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:06 vm06 bash[27746]: audit 2026-03-08T23:02:05.303387+0000 mon.a (mon.0) 566 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:02:07.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:06 vm06 bash[27746]: audit 2026-03-08T23:02:05.303387+0000 mon.a (mon.0) 566 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:02:07.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:06 vm06 bash[27746]: audit 2026-03-08T23:02:05.715493+0000 mon.a (mon.0) 567 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:02:07.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:06 vm06 bash[27746]: audit 2026-03-08T23:02:05.715493+0000 mon.a (mon.0) 567 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 
2026-03-08T23:02:07.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:06 vm06 bash[27746]: audit 2026-03-08T23:02:05.716122+0000 mon.a (mon.0) 568 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:02:07.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:06 vm06 bash[27746]: audit 2026-03-08T23:02:05.716122+0000 mon.a (mon.0) 568 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:02:07.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:06 vm06 bash[27746]: audit 2026-03-08T23:02:05.723263+0000 mon.a (mon.0) 569 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:02:07.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:06 vm06 bash[27746]: audit 2026-03-08T23:02:05.723263+0000 mon.a (mon.0) 569 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:02:07.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:06 vm06 bash[27746]: cluster 2026-03-08T23:02:05.963471+0000 mon.a (mon.0) 570 : cluster [DBG] osdmap e45: 7 total, 7 up, 7 in 2026-03-08T23:02:07.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:06 vm06 bash[27746]: cluster 2026-03-08T23:02:05.963471+0000 mon.a (mon.0) 570 : cluster [DBG] osdmap e45: 7 total, 7 up, 7 in 2026-03-08T23:02:08.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:07 vm06 bash[20625]: cluster 2026-03-08T23:02:05.486501+0000 mgr.y (mgr.14150) 203 : cluster [DBG] pgmap v181: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-08T23:02:08.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:07 vm06 bash[20625]: cluster 2026-03-08T23:02:05.486501+0000 mgr.y (mgr.14150) 203 : cluster [DBG] pgmap v181: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-08T23:02:08.030 
INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:07 vm06 bash[20625]: audit 2026-03-08T23:02:06.760387+0000 mon.a (mon.0) 571 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:02:08.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:07 vm06 bash[20625]: audit 2026-03-08T23:02:06.760387+0000 mon.a (mon.0) 571 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:02:08.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:07 vm06 bash[20625]: audit 2026-03-08T23:02:06.766024+0000 mon.a (mon.0) 572 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:02:08.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:07 vm06 bash[20625]: audit 2026-03-08T23:02:06.766024+0000 mon.a (mon.0) 572 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:02:08.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:07 vm06 bash[20625]: audit 2026-03-08T23:02:06.770701+0000 mon.a (mon.0) 573 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:02:08.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:07 vm06 bash[20625]: audit 2026-03-08T23:02:06.770701+0000 mon.a (mon.0) 573 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:02:08.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:07 vm06 bash[20625]: cluster 2026-03-08T23:02:06.967671+0000 mon.a (mon.0) 574 : cluster [DBG] osdmap e46: 7 total, 7 up, 7 in 2026-03-08T23:02:08.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:07 vm06 bash[20625]: cluster 2026-03-08T23:02:06.967671+0000 mon.a (mon.0) 574 : cluster [DBG] osdmap e46: 7 total, 7 up, 7 in 2026-03-08T23:02:08.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:07 vm06 bash[27746]: cluster 
2026-03-08T23:02:05.486501+0000 mgr.y (mgr.14150) 203 : cluster [DBG] pgmap v181: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-08T23:02:08.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:07 vm06 bash[27746]: cluster 2026-03-08T23:02:05.486501+0000 mgr.y (mgr.14150) 203 : cluster [DBG] pgmap v181: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-08T23:02:08.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:07 vm06 bash[27746]: audit 2026-03-08T23:02:06.760387+0000 mon.a (mon.0) 571 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:02:08.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:07 vm06 bash[27746]: audit 2026-03-08T23:02:06.760387+0000 mon.a (mon.0) 571 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:02:08.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:07 vm06 bash[27746]: audit 2026-03-08T23:02:06.766024+0000 mon.a (mon.0) 572 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:02:08.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:07 vm06 bash[27746]: audit 2026-03-08T23:02:06.766024+0000 mon.a (mon.0) 572 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:02:08.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:07 vm06 bash[27746]: audit 2026-03-08T23:02:06.770701+0000 mon.a (mon.0) 573 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:02:08.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:07 vm06 bash[27746]: audit 2026-03-08T23:02:06.770701+0000 mon.a (mon.0) 573 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:02:08.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:07 vm06 
bash[27746]: cluster 2026-03-08T23:02:06.967671+0000 mon.a (mon.0) 574 : cluster [DBG] osdmap e46: 7 total, 7 up, 7 in 2026-03-08T23:02:08.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:07 vm06 bash[27746]: cluster 2026-03-08T23:02:06.967671+0000 mon.a (mon.0) 574 : cluster [DBG] osdmap e46: 7 total, 7 up, 7 in 2026-03-08T23:02:08.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:07 vm11 bash[23232]: cluster 2026-03-08T23:02:05.486501+0000 mgr.y (mgr.14150) 203 : cluster [DBG] pgmap v181: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-08T23:02:08.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:07 vm11 bash[23232]: cluster 2026-03-08T23:02:05.486501+0000 mgr.y (mgr.14150) 203 : cluster [DBG] pgmap v181: 1 pgs: 1 active+clean; 449 KiB data, 161 MiB used, 120 GiB / 120 GiB avail 2026-03-08T23:02:08.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:07 vm11 bash[23232]: audit 2026-03-08T23:02:06.760387+0000 mon.a (mon.0) 571 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:02:08.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:07 vm11 bash[23232]: audit 2026-03-08T23:02:06.760387+0000 mon.a (mon.0) 571 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:02:08.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:07 vm11 bash[23232]: audit 2026-03-08T23:02:06.766024+0000 mon.a (mon.0) 572 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:02:08.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:07 vm11 bash[23232]: audit 2026-03-08T23:02:06.766024+0000 mon.a (mon.0) 572 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:02:08.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:07 vm11 bash[23232]: audit 
2026-03-08T23:02:06.770701+0000 mon.a (mon.0) 573 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:02:08.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:07 vm11 bash[23232]: audit 2026-03-08T23:02:06.770701+0000 mon.a (mon.0) 573 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:02:08.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:07 vm11 bash[23232]: cluster 2026-03-08T23:02:06.967671+0000 mon.a (mon.0) 574 : cluster [DBG] osdmap e46: 7 total, 7 up, 7 in
2026-03-08T23:02:09.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:08 vm11 bash[23232]: cluster 2026-03-08T23:02:07.486977+0000 mgr.y (mgr.14150) 204 : cluster [DBG] pgmap v184: 1 pgs: 1 remapped+peering; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail
2026-03-08T23:02:09.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:08 vm11 bash[23232]: cluster 2026-03-08T23:02:07.962951+0000 mon.a (mon.0) 575 : cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)
2026-03-08T23:02:09.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:08 vm06 bash[20625]: cluster 2026-03-08T23:02:07.486977+0000 mgr.y (mgr.14150) 204 : cluster [DBG] pgmap v184: 1 pgs: 1 remapped+peering; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail
2026-03-08T23:02:09.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:08 vm06 bash[20625]: cluster 2026-03-08T23:02:07.962951+0000 mon.a (mon.0) 575 : cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)
2026-03-08T23:02:09.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:08 vm06 bash[27746]: cluster 2026-03-08T23:02:07.486977+0000 mgr.y (mgr.14150) 204 : cluster [DBG] pgmap v184: 1 pgs: 1 remapped+peering; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail
2026-03-08T23:02:09.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:08 vm06 bash[27746]: cluster 2026-03-08T23:02:07.962951+0000 mon.a (mon.0) 575 : cluster [WRN] Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)
2026-03-08T23:02:11.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:10 vm11 bash[23232]: cluster 2026-03-08T23:02:09.487320+0000 mgr.y (mgr.14150) 205 : cluster [DBG] pgmap v185: 1 pgs: 1 remapped+peering; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail
2026-03-08T23:02:11.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:10 vm06 bash[20625]: cluster 2026-03-08T23:02:09.487320+0000 mgr.y (mgr.14150) 205 : cluster [DBG] pgmap v185: 1 pgs: 1 remapped+peering; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail
2026-03-08T23:02:11.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:10 vm06 bash[27746]: cluster 2026-03-08T23:02:09.487320+0000 mgr.y (mgr.14150) 205 : cluster [DBG] pgmap v185: 1 pgs: 1 remapped+peering; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail
2026-03-08T23:02:11.511 INFO:teuthology.orchestra.run.vm11.stderr:Inferring config /var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/mon.b/config
2026-03-08T23:02:12.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:11 vm06 bash[20625]: cluster 2026-03-08T23:02:11.791515+0000 mon.a (mon.0) 576 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg peering)
2026-03-08T23:02:12.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:11 vm06 bash[20625]: cluster 2026-03-08T23:02:11.791543+0000 mon.a (mon.0) 577 : cluster [INF] Cluster is now healthy
2026-03-08T23:02:12.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:11 vm06 bash[27746]: cluster 2026-03-08T23:02:11.791515+0000 mon.a (mon.0) 576 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg peering)
2026-03-08T23:02:12.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:11 vm06 bash[27746]: cluster 2026-03-08T23:02:11.791543+0000 mon.a (mon.0) 577 : cluster [INF] Cluster is now healthy
2026-03-08T23:02:12.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:11 vm11 bash[23232]: cluster 2026-03-08T23:02:11.791515+0000 mon.a (mon.0) 576 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 1 pg peering)
2026-03-08T23:02:12.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:11 vm11 bash[23232]: cluster 2026-03-08T23:02:11.791543+0000 mon.a (mon.0) 577 : cluster [INF] Cluster is now healthy
2026-03-08T23:02:12.408 INFO:teuthology.orchestra.run.vm11.stdout:
2026-03-08T23:02:12.422 DEBUG:teuthology.orchestra.run.vm11:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e2eb96e6-1b41-11f1-83e5-75f1b5373d30 -- ceph orch daemon add osd vm11:/dev/vdb
2026-03-08T23:02:13.170 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:12 vm11 bash[23232]: cluster 2026-03-08T23:02:11.487600+0000 mgr.y (mgr.14150) 206 : cluster [DBG] pgmap v186: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail
2026-03-08T23:02:13.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:12 vm06 bash[20625]: cluster 2026-03-08T23:02:11.487600+0000 mgr.y (mgr.14150) 206 : cluster [DBG] pgmap v186: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail
2026-03-08T23:02:13.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:12 vm06 bash[27746]: cluster 2026-03-08T23:02:11.487600+0000 mgr.y (mgr.14150) 206 : cluster [DBG] pgmap v186: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail
2026-03-08T23:02:14.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:14 vm06 bash[20625]: cephadm 2026-03-08T23:02:13.228013+0000 mgr.y (mgr.14150) 207 : cephadm [INF] Detected new or changed devices on vm11
2026-03-08T23:02:14.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:14 vm06 bash[20625]: audit 2026-03-08T23:02:13.235832+0000 mon.a (mon.0) 578 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:02:14.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:14 vm06 bash[20625]: audit 2026-03-08T23:02:13.241209+0000 mon.a (mon.0) 579 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:02:14.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:14 vm06 bash[20625]: audit 2026-03-08T23:02:13.241934+0000 mon.a (mon.0) 580 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch
2026-03-08T23:02:14.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:14 vm06 bash[20625]: audit 2026-03-08T23:02:13.242360+0000 mon.a (mon.0) 581 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch
2026-03-08T23:02:14.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:14 vm06 bash[20625]: audit 2026-03-08T23:02:13.242674+0000 mon.a (mon.0) 582 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch
2026-03-08T23:02:14.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:14 vm06 bash[20625]: cephadm 2026-03-08T23:02:13.242916+0000 mgr.y (mgr.14150) 208 : cephadm [INF] Adjusting osd_memory_target on vm11 to 151.9M
2026-03-08T23:02:14.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:14 vm06 bash[20625]: cephadm 2026-03-08T23:02:13.243222+0000 mgr.y (mgr.14150) 209 : cephadm [WRN] Unable to set osd_memory_target on vm11 to 159307229: error parsing value: Value '159307229' is below minimum 939524096
2026-03-08T23:02:14.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:14 vm06 bash[20625]: audit 2026-03-08T23:02:13.243524+0000 mon.a (mon.0) 583 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:02:14.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:14 vm06 bash[20625]: audit 2026-03-08T23:02:13.243951+0000 mon.a (mon.0) 584 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T23:02:14.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:14 vm06 bash[20625]: audit 2026-03-08T23:02:13.248424+0000 mon.a (mon.0) 585 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:02:14.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:14 vm06 bash[27746]: cephadm 2026-03-08T23:02:13.228013+0000 mgr.y (mgr.14150) 207 : cephadm [INF] Detected new or changed devices on vm11
2026-03-08T23:02:14.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:14 vm06 bash[27746]: audit 2026-03-08T23:02:13.235832+0000 mon.a (mon.0) 578 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:02:14.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:14 vm06 bash[27746]: audit 2026-03-08T23:02:13.241209+0000 mon.a (mon.0) 579 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:02:14.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:14 vm06 bash[27746]: audit 2026-03-08T23:02:13.241934+0000 mon.a (mon.0) 580 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch
2026-03-08T23:02:14.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:14 vm06 bash[27746]: audit 2026-03-08T23:02:13.242360+0000 mon.a (mon.0) 581 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch
2026-03-08T23:02:14.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:14 vm06 bash[27746]: audit 2026-03-08T23:02:13.242674+0000 mon.a (mon.0) 582 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch
2026-03-08T23:02:14.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:14 vm06 bash[27746]: cephadm 2026-03-08T23:02:13.242916+0000 mgr.y (mgr.14150) 208 : cephadm [INF] Adjusting osd_memory_target on vm11 to 151.9M
2026-03-08T23:02:14.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:14 vm06 bash[27746]: cephadm 2026-03-08T23:02:13.243222+0000 mgr.y (mgr.14150) 209 : cephadm [WRN] Unable to set osd_memory_target on vm11 to 159307229: error parsing value: Value '159307229' is below minimum 939524096
2026-03-08T23:02:14.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:14 vm06 bash[27746]: audit 2026-03-08T23:02:13.243524+0000 mon.a (mon.0) 583 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:02:14.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:14 vm06 bash[27746]: audit 2026-03-08T23:02:13.243951+0000 mon.a (mon.0) 584 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T23:02:14.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:14 vm06 bash[27746]: audit 2026-03-08T23:02:13.248424+0000 mon.a (mon.0) 585 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:02:14.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:14 vm11 bash[23232]: cephadm 2026-03-08T23:02:13.228013+0000 mgr.y (mgr.14150) 207 : cephadm [INF] Detected new or changed devices on vm11
2026-03-08T23:02:14.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:14 vm11 bash[23232]: audit 2026-03-08T23:02:13.235832+0000 mon.a (mon.0) 578 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:02:14.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:14 vm11 bash[23232]: audit 2026-03-08T23:02:13.241209+0000 mon.a (mon.0) 579 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:02:14.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:14 vm11 bash[23232]: audit 2026-03-08T23:02:13.241934+0000 mon.a (mon.0) 580 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch
2026-03-08T23:02:14.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:14 vm11 bash[23232]: audit 2026-03-08T23:02:13.242360+0000 mon.a (mon.0) 581 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch
2026-03-08T23:02:14.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:14 vm11 bash[23232]: audit 2026-03-08T23:02:13.242674+0000 mon.a (mon.0) 582 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch
2026-03-08T23:02:14.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:14 vm11 bash[23232]: cephadm 2026-03-08T23:02:13.242916+0000 mgr.y (mgr.14150) 208 : cephadm [INF] Adjusting osd_memory_target on vm11 to 151.9M
2026-03-08T23:02:14.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:14 vm11 bash[23232]: cephadm 2026-03-08T23:02:13.243222+0000 mgr.y (mgr.14150) 209 : cephadm [WRN] Unable to set osd_memory_target on vm11 to 159307229: error parsing value: Value '159307229' is below minimum 939524096
2026-03-08T23:02:14.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:14 vm11 bash[23232]: audit 2026-03-08T23:02:13.243524+0000 mon.a (mon.0) 583 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:02:14.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:14 vm11 bash[23232]: audit 2026-03-08T23:02:13.243951+0000 mon.a (mon.0) 584 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T23:02:14.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:14 vm11 bash[23232]: audit 2026-03-08T23:02:13.248424+0000 mon.a (mon.0) 585 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:02:15.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:15 vm06 bash[20625]: cluster 2026-03-08T23:02:13.487899+0000 mgr.y (mgr.14150) 210 : cluster [DBG] pgmap v187: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail
2026-03-08T23:02:15.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:15 vm06 bash[27746]: cluster 2026-03-08T23:02:13.487899+0000 mgr.y (mgr.14150) 210 : cluster [DBG] pgmap v187: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail
2026-03-08T23:02:15.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:15 vm11 bash[23232]: cluster 2026-03-08T23:02:13.487899+0000 mgr.y (mgr.14150) 210 : cluster [DBG] pgmap v187: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail
2026-03-08T23:02:17.092 INFO:teuthology.orchestra.run.vm11.stderr:Inferring config /var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/mon.b/config
2026-03-08T23:02:17.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:17 vm06 bash[20625]: cluster 2026-03-08T23:02:15.488161+0000 mgr.y (mgr.14150) 211 : cluster [DBG] pgmap v188: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 47 KiB/s, 0 objects/s recovering
2026-03-08T23:02:17.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:17 vm06 bash[27746]: cluster 2026-03-08T23:02:15.488161+0000 mgr.y (mgr.14150) 211 : cluster [DBG] pgmap v188: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 47 KiB/s, 0 objects/s recovering
2026-03-08T23:02:17.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:17 vm11 bash[23232]: cluster 2026-03-08T23:02:15.488161+0000 mgr.y (mgr.14150) 211 : cluster [DBG] pgmap v188: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 47 KiB/s, 0 objects/s recovering
2026-03-08T23:02:18.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:18 vm06 bash[20625]: audit 2026-03-08T23:02:17.386254+0000 mon.a (mon.0) 586 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-08T23:02:18.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:18 vm06 bash[20625]: audit 2026-03-08T23:02:17.387617+0000 mon.a (mon.0) 587 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-08T23:02:18.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:18 vm06 bash[20625]: audit 2026-03-08T23:02:17.389386+0000 mon.a (mon.0) 588 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:02:18.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:18 vm06 bash[27746]: audit 2026-03-08T23:02:17.386254+0000 mon.a (mon.0) 586 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-08T23:02:18.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:18 vm06 bash[27746]: audit 2026-03-08T23:02:17.387617+0000 mon.a (mon.0) 587 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-08T23:02:18.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:18 vm06 bash[27746]: audit 2026-03-08T23:02:17.389386+0000 mon.a (mon.0) 588 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:02:18.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:18 vm11 bash[23232]: audit 2026-03-08T23:02:17.386254+0000 mon.a (mon.0) 586 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
2026-03-08T23:02:18.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:18 vm11 bash[23232]: audit 2026-03-08T23:02:17.387617+0000 mon.a (mon.0) 587 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
2026-03-08T23:02:18.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:18 vm11 bash[23232]: audit 2026-03-08T23:02:17.389386+0000 mon.a (mon.0) 588 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:02:19.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:19 vm06 bash[20625]: audit 2026-03-08T23:02:17.384846+0000 mgr.y (mgr.14150) 212 : audit [DBG] from='client.24280 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm11:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch
2026-03-08T23:02:19.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:19 vm06 bash[20625]: cluster 2026-03-08T23:02:17.488401+0000 mgr.y (mgr.14150) 213 : cluster [DBG] pgmap v189: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 43 KiB/s, 0 objects/s recovering
2026-03-08T23:02:19.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:19 vm06 bash[27746]: audit 2026-03-08T23:02:17.384846+0000 mgr.y (mgr.14150) 212 : audit [DBG] from='client.24280 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm11:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch
2026-03-08T23:02:19.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:19 vm06 bash[27746]: cluster 2026-03-08T23:02:17.488401+0000 mgr.y (mgr.14150) 213 : cluster [DBG] pgmap v189: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 43 KiB/s, 0 objects/s recovering
2026-03-08T23:02:19.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:19 vm11 bash[23232]: audit 2026-03-08T23:02:17.384846+0000 mgr.y (mgr.14150) 212 : audit [DBG] from='client.24280 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm11:/dev/vdb", "target": ["mon-mgr", ""]}]: dispatch
2026-03-08T23:02:19.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:19 vm11 bash[23232]: cluster 2026-03-08T23:02:17.488401+0000 mgr.y (mgr.14150) 213 : cluster [DBG] pgmap v189: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 43 KiB/s, 0 objects/s recovering
2026-03-08T23:02:21.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:21 vm11 bash[23232]: cluster 2026-03-08T23:02:19.488642+0000 mgr.y (mgr.14150) 214 : cluster [DBG] pgmap v190: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 37 KiB/s, 0 objects/s recovering
2026-03-08T23:02:21.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:21 vm06 bash[20625]: cluster 2026-03-08T23:02:19.488642+0000 mgr.y (mgr.14150) 214 : cluster [DBG] pgmap v190: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 37 KiB/s, 0 objects/s recovering
2026-03-08T23:02:21.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:21 vm06 bash[27746]: cluster 2026-03-08T23:02:19.488642+0000 mgr.y (mgr.14150)
214 : cluster [DBG] pgmap v190: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-08T23:02:21.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:21 vm06 bash[27746]: cluster 2026-03-08T23:02:19.488642+0000 mgr.y (mgr.14150) 214 : cluster [DBG] pgmap v190: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-08T23:02:23.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:23 vm11 bash[23232]: cluster 2026-03-08T23:02:21.488962+0000 mgr.y (mgr.14150) 215 : cluster [DBG] pgmap v191: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-08T23:02:23.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:23 vm11 bash[23232]: cluster 2026-03-08T23:02:21.488962+0000 mgr.y (mgr.14150) 215 : cluster [DBG] pgmap v191: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-08T23:02:23.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:23 vm11 bash[23232]: audit 2026-03-08T23:02:22.836883+0000 mon.b (mon.1) 22 : audit [INF] from='client.? 192.168.123.111:0/2077650164' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "29b40029-6843-47e4-b83e-af6cefd3e500"}]: dispatch 2026-03-08T23:02:23.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:23 vm11 bash[23232]: audit 2026-03-08T23:02:22.836883+0000 mon.b (mon.1) 22 : audit [INF] from='client.? 192.168.123.111:0/2077650164' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "29b40029-6843-47e4-b83e-af6cefd3e500"}]: dispatch 2026-03-08T23:02:23.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:23 vm11 bash[23232]: audit 2026-03-08T23:02:22.838376+0000 mon.a (mon.0) 589 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "29b40029-6843-47e4-b83e-af6cefd3e500"}]: dispatch 2026-03-08T23:02:23.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:23 vm11 bash[23232]: audit 2026-03-08T23:02:22.838376+0000 mon.a (mon.0) 589 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "29b40029-6843-47e4-b83e-af6cefd3e500"}]: dispatch 2026-03-08T23:02:23.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:23 vm11 bash[23232]: audit 2026-03-08T23:02:22.843755+0000 mon.a (mon.0) 590 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "29b40029-6843-47e4-b83e-af6cefd3e500"}]': finished 2026-03-08T23:02:23.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:23 vm11 bash[23232]: audit 2026-03-08T23:02:22.843755+0000 mon.a (mon.0) 590 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "29b40029-6843-47e4-b83e-af6cefd3e500"}]': finished 2026-03-08T23:02:23.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:23 vm11 bash[23232]: cluster 2026-03-08T23:02:22.849362+0000 mon.a (mon.0) 591 : cluster [DBG] osdmap e47: 8 total, 7 up, 8 in 2026-03-08T23:02:23.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:23 vm11 bash[23232]: cluster 2026-03-08T23:02:22.849362+0000 mon.a (mon.0) 591 : cluster [DBG] osdmap e47: 8 total, 7 up, 8 in 2026-03-08T23:02:23.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:23 vm11 bash[23232]: audit 2026-03-08T23:02:22.849802+0000 mon.a (mon.0) 592 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-08T23:02:23.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:23 vm11 bash[23232]: audit 2026-03-08T23:02:22.849802+0000 mon.a (mon.0) 592 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 
2026-03-08T23:02:23.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:23 vm06 bash[20625]: cluster 2026-03-08T23:02:21.488962+0000 mgr.y (mgr.14150) 215 : cluster [DBG] pgmap v191: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-08T23:02:23.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:23 vm06 bash[20625]: cluster 2026-03-08T23:02:21.488962+0000 mgr.y (mgr.14150) 215 : cluster [DBG] pgmap v191: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-08T23:02:23.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:23 vm06 bash[20625]: audit 2026-03-08T23:02:22.836883+0000 mon.b (mon.1) 22 : audit [INF] from='client.? 192.168.123.111:0/2077650164' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "29b40029-6843-47e4-b83e-af6cefd3e500"}]: dispatch 2026-03-08T23:02:23.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:23 vm06 bash[20625]: audit 2026-03-08T23:02:22.836883+0000 mon.b (mon.1) 22 : audit [INF] from='client.? 192.168.123.111:0/2077650164' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "29b40029-6843-47e4-b83e-af6cefd3e500"}]: dispatch 2026-03-08T23:02:23.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:23 vm06 bash[20625]: audit 2026-03-08T23:02:22.838376+0000 mon.a (mon.0) 589 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "29b40029-6843-47e4-b83e-af6cefd3e500"}]: dispatch 2026-03-08T23:02:23.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:23 vm06 bash[20625]: audit 2026-03-08T23:02:22.838376+0000 mon.a (mon.0) 589 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "29b40029-6843-47e4-b83e-af6cefd3e500"}]: dispatch 2026-03-08T23:02:23.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:23 vm06 bash[20625]: audit 2026-03-08T23:02:22.843755+0000 mon.a (mon.0) 590 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "29b40029-6843-47e4-b83e-af6cefd3e500"}]': finished 2026-03-08T23:02:23.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:23 vm06 bash[20625]: audit 2026-03-08T23:02:22.843755+0000 mon.a (mon.0) 590 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "29b40029-6843-47e4-b83e-af6cefd3e500"}]': finished 2026-03-08T23:02:23.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:23 vm06 bash[20625]: cluster 2026-03-08T23:02:22.849362+0000 mon.a (mon.0) 591 : cluster [DBG] osdmap e47: 8 total, 7 up, 8 in 2026-03-08T23:02:23.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:23 vm06 bash[20625]: cluster 2026-03-08T23:02:22.849362+0000 mon.a (mon.0) 591 : cluster [DBG] osdmap e47: 8 total, 7 up, 8 in 2026-03-08T23:02:23.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:23 vm06 bash[20625]: audit 2026-03-08T23:02:22.849802+0000 mon.a (mon.0) 592 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-08T23:02:23.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:23 vm06 bash[20625]: audit 2026-03-08T23:02:22.849802+0000 mon.a (mon.0) 592 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-08T23:02:23.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:23 vm06 bash[27746]: cluster 2026-03-08T23:02:21.488962+0000 mgr.y (mgr.14150) 215 : cluster [DBG] pgmap v191: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 37 KiB/s, 0 objects/s recovering 
2026-03-08T23:02:23.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:23 vm06 bash[27746]: cluster 2026-03-08T23:02:21.488962+0000 mgr.y (mgr.14150) 215 : cluster [DBG] pgmap v191: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail; 37 KiB/s, 0 objects/s recovering 2026-03-08T23:02:23.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:23 vm06 bash[27746]: audit 2026-03-08T23:02:22.836883+0000 mon.b (mon.1) 22 : audit [INF] from='client.? 192.168.123.111:0/2077650164' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "29b40029-6843-47e4-b83e-af6cefd3e500"}]: dispatch 2026-03-08T23:02:23.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:23 vm06 bash[27746]: audit 2026-03-08T23:02:22.836883+0000 mon.b (mon.1) 22 : audit [INF] from='client.? 192.168.123.111:0/2077650164' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "29b40029-6843-47e4-b83e-af6cefd3e500"}]: dispatch 2026-03-08T23:02:23.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:23 vm06 bash[27746]: audit 2026-03-08T23:02:22.838376+0000 mon.a (mon.0) 589 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "29b40029-6843-47e4-b83e-af6cefd3e500"}]: dispatch 2026-03-08T23:02:23.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:23 vm06 bash[27746]: audit 2026-03-08T23:02:22.838376+0000 mon.a (mon.0) 589 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "29b40029-6843-47e4-b83e-af6cefd3e500"}]: dispatch 2026-03-08T23:02:23.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:23 vm06 bash[27746]: audit 2026-03-08T23:02:22.843755+0000 mon.a (mon.0) 590 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "29b40029-6843-47e4-b83e-af6cefd3e500"}]': finished 2026-03-08T23:02:23.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:23 vm06 bash[27746]: audit 2026-03-08T23:02:22.843755+0000 mon.a (mon.0) 590 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "29b40029-6843-47e4-b83e-af6cefd3e500"}]': finished 2026-03-08T23:02:23.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:23 vm06 bash[27746]: cluster 2026-03-08T23:02:22.849362+0000 mon.a (mon.0) 591 : cluster [DBG] osdmap e47: 8 total, 7 up, 8 in 2026-03-08T23:02:23.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:23 vm06 bash[27746]: cluster 2026-03-08T23:02:22.849362+0000 mon.a (mon.0) 591 : cluster [DBG] osdmap e47: 8 total, 7 up, 8 in 2026-03-08T23:02:23.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:23 vm06 bash[27746]: audit 2026-03-08T23:02:22.849802+0000 mon.a (mon.0) 592 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-08T23:02:23.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:23 vm06 bash[27746]: audit 2026-03-08T23:02:22.849802+0000 mon.a (mon.0) 592 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-08T23:02:24.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:24 vm11 bash[23232]: audit 2026-03-08T23:02:23.482778+0000 mon.b (mon.1) 23 : audit [DBG] from='client.? 192.168.123.111:0/952369733' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-08T23:02:24.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:24 vm11 bash[23232]: audit 2026-03-08T23:02:23.482778+0000 mon.b (mon.1) 23 : audit [DBG] from='client.? 
192.168.123.111:0/952369733' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-08T23:02:24.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:24 vm06 bash[20625]: audit 2026-03-08T23:02:23.482778+0000 mon.b (mon.1) 23 : audit [DBG] from='client.? 192.168.123.111:0/952369733' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-08T23:02:24.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:24 vm06 bash[20625]: audit 2026-03-08T23:02:23.482778+0000 mon.b (mon.1) 23 : audit [DBG] from='client.? 192.168.123.111:0/952369733' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-08T23:02:24.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:24 vm06 bash[27746]: audit 2026-03-08T23:02:23.482778+0000 mon.b (mon.1) 23 : audit [DBG] from='client.? 192.168.123.111:0/952369733' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-08T23:02:24.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:24 vm06 bash[27746]: audit 2026-03-08T23:02:23.482778+0000 mon.b (mon.1) 23 : audit [DBG] from='client.? 
192.168.123.111:0/952369733' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-08T23:02:25.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:25 vm11 bash[23232]: cluster 2026-03-08T23:02:23.489272+0000 mgr.y (mgr.14150) 216 : cluster [DBG] pgmap v193: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-08T23:02:25.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:25 vm11 bash[23232]: cluster 2026-03-08T23:02:23.489272+0000 mgr.y (mgr.14150) 216 : cluster [DBG] pgmap v193: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-08T23:02:25.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:25 vm06 bash[20625]: cluster 2026-03-08T23:02:23.489272+0000 mgr.y (mgr.14150) 216 : cluster [DBG] pgmap v193: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-08T23:02:25.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:25 vm06 bash[20625]: cluster 2026-03-08T23:02:23.489272+0000 mgr.y (mgr.14150) 216 : cluster [DBG] pgmap v193: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-08T23:02:25.779 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:25 vm06 bash[27746]: cluster 2026-03-08T23:02:23.489272+0000 mgr.y (mgr.14150) 216 : cluster [DBG] pgmap v193: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-08T23:02:25.779 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:25 vm06 bash[27746]: cluster 2026-03-08T23:02:23.489272+0000 mgr.y (mgr.14150) 216 : cluster [DBG] pgmap v193: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-08T23:02:27.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:27 vm06 bash[20625]: cluster 2026-03-08T23:02:25.489534+0000 mgr.y (mgr.14150) 217 : cluster [DBG] pgmap v194: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-08T23:02:27.779 
INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:27 vm06 bash[20625]: cluster 2026-03-08T23:02:25.489534+0000 mgr.y (mgr.14150) 217 : cluster [DBG] pgmap v194: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-08T23:02:27.779 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:27 vm06 bash[27746]: cluster 2026-03-08T23:02:25.489534+0000 mgr.y (mgr.14150) 217 : cluster [DBG] pgmap v194: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-08T23:02:27.779 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:27 vm06 bash[27746]: cluster 2026-03-08T23:02:25.489534+0000 mgr.y (mgr.14150) 217 : cluster [DBG] pgmap v194: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-08T23:02:27.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:27 vm11 bash[23232]: cluster 2026-03-08T23:02:25.489534+0000 mgr.y (mgr.14150) 217 : cluster [DBG] pgmap v194: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-08T23:02:27.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:27 vm11 bash[23232]: cluster 2026-03-08T23:02:25.489534+0000 mgr.y (mgr.14150) 217 : cluster [DBG] pgmap v194: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-08T23:02:29.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:29 vm06 bash[20625]: cluster 2026-03-08T23:02:27.489827+0000 mgr.y (mgr.14150) 218 : cluster [DBG] pgmap v195: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-08T23:02:29.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:29 vm06 bash[20625]: cluster 2026-03-08T23:02:27.489827+0000 mgr.y (mgr.14150) 218 : cluster [DBG] pgmap v195: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-08T23:02:29.779 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:29 vm06 bash[27746]: cluster 2026-03-08T23:02:27.489827+0000 mgr.y (mgr.14150) 218 : cluster [DBG] pgmap v195: 1 
pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-08T23:02:29.779 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:29 vm06 bash[27746]: cluster 2026-03-08T23:02:27.489827+0000 mgr.y (mgr.14150) 218 : cluster [DBG] pgmap v195: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-08T23:02:29.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:29 vm11 bash[23232]: cluster 2026-03-08T23:02:27.489827+0000 mgr.y (mgr.14150) 218 : cluster [DBG] pgmap v195: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-08T23:02:29.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:29 vm11 bash[23232]: cluster 2026-03-08T23:02:27.489827+0000 mgr.y (mgr.14150) 218 : cluster [DBG] pgmap v195: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-08T23:02:30.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:30 vm06 bash[20625]: cluster 2026-03-08T23:02:29.490089+0000 mgr.y (mgr.14150) 219 : cluster [DBG] pgmap v196: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-08T23:02:30.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:30 vm06 bash[20625]: cluster 2026-03-08T23:02:29.490089+0000 mgr.y (mgr.14150) 219 : cluster [DBG] pgmap v196: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-08T23:02:30.779 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:30 vm06 bash[27746]: cluster 2026-03-08T23:02:29.490089+0000 mgr.y (mgr.14150) 219 : cluster [DBG] pgmap v196: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-08T23:02:30.779 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:30 vm06 bash[27746]: cluster 2026-03-08T23:02:29.490089+0000 mgr.y (mgr.14150) 219 : cluster [DBG] pgmap v196: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-08T23:02:30.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:30 vm11 
bash[23232]: cluster 2026-03-08T23:02:29.490089+0000 mgr.y (mgr.14150) 219 : cluster [DBG] pgmap v196: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-08T23:02:30.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:30 vm11 bash[23232]: cluster 2026-03-08T23:02:29.490089+0000 mgr.y (mgr.14150) 219 : cluster [DBG] pgmap v196: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-08T23:02:32.678 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:32 vm11 bash[23232]: cluster 2026-03-08T23:02:31.490350+0000 mgr.y (mgr.14150) 220 : cluster [DBG] pgmap v197: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-08T23:02:32.678 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:32 vm11 bash[23232]: cluster 2026-03-08T23:02:31.490350+0000 mgr.y (mgr.14150) 220 : cluster [DBG] pgmap v197: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-08T23:02:32.678 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:32 vm11 bash[23232]: audit 2026-03-08T23:02:32.418615+0000 mon.a (mon.0) 593 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch 2026-03-08T23:02:32.678 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:32 vm11 bash[23232]: audit 2026-03-08T23:02:32.418615+0000 mon.a (mon.0) 593 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch 2026-03-08T23:02:32.678 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:32 vm11 bash[23232]: audit 2026-03-08T23:02:32.419277+0000 mon.a (mon.0) 594 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:02:32.678 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:32 vm11 bash[23232]: audit 2026-03-08T23:02:32.419277+0000 mon.a (mon.0) 594 : audit [DBG] 
from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:02:33.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:32 vm06 bash[20625]: cluster 2026-03-08T23:02:31.490350+0000 mgr.y (mgr.14150) 220 : cluster [DBG] pgmap v197: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-08T23:02:33.041 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:32 vm06 bash[20625]: cluster 2026-03-08T23:02:31.490350+0000 mgr.y (mgr.14150) 220 : cluster [DBG] pgmap v197: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-08T23:02:33.041 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:32 vm06 bash[20625]: audit 2026-03-08T23:02:32.418615+0000 mon.a (mon.0) 593 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch 2026-03-08T23:02:33.041 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:32 vm06 bash[20625]: audit 2026-03-08T23:02:32.418615+0000 mon.a (mon.0) 593 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch 2026-03-08T23:02:33.041 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:32 vm06 bash[20625]: audit 2026-03-08T23:02:32.419277+0000 mon.a (mon.0) 594 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:02:33.041 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:32 vm06 bash[20625]: audit 2026-03-08T23:02:32.419277+0000 mon.a (mon.0) 594 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:02:33.041 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:32 vm06 bash[27746]: cluster 2026-03-08T23:02:31.490350+0000 mgr.y (mgr.14150) 220 : cluster [DBG] pgmap v197: 1 pgs: 1 active+clean; 
449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-08T23:02:33.041 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:32 vm06 bash[27746]: cluster 2026-03-08T23:02:31.490350+0000 mgr.y (mgr.14150) 220 : cluster [DBG] pgmap v197: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-08T23:02:33.041 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:32 vm06 bash[27746]: audit 2026-03-08T23:02:32.418615+0000 mon.a (mon.0) 593 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch 2026-03-08T23:02:33.041 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:32 vm06 bash[27746]: audit 2026-03-08T23:02:32.418615+0000 mon.a (mon.0) 593 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "osd.7"}]: dispatch 2026-03-08T23:02:33.041 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:32 vm06 bash[27746]: audit 2026-03-08T23:02:32.419277+0000 mon.a (mon.0) 594 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:02:33.041 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:32 vm06 bash[27746]: audit 2026-03-08T23:02:32.419277+0000 mon.a (mon.0) 594 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:02:34.028 INFO:journalctl@ceph.osd.4.vm11.stdout:Mar 08 23:02:33 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-08T23:02:34.028 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 08 23:02:33 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:02:34.028 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:33 vm11 bash[23232]: cephadm 2026-03-08T23:02:32.420658+0000 mgr.y (mgr.14150) 221 : cephadm [INF] Deploying daemon osd.7 on vm11 2026-03-08T23:02:34.028 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:33 vm11 bash[23232]: cephadm 2026-03-08T23:02:32.420658+0000 mgr.y (mgr.14150) 221 : cephadm [INF] Deploying daemon osd.7 on vm11 2026-03-08T23:02:34.028 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:33 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:02:34.029 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:02:33 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:02:34.029 INFO:journalctl@ceph.osd.5.vm11.stdout:Mar 08 23:02:33 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. 
This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:02:34.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:33 vm06 bash[20625]: cephadm 2026-03-08T23:02:32.420658+0000 mgr.y (mgr.14150) 221 : cephadm [INF] Deploying daemon osd.7 on vm11 2026-03-08T23:02:34.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:33 vm06 bash[20625]: cephadm 2026-03-08T23:02:32.420658+0000 mgr.y (mgr.14150) 221 : cephadm [INF] Deploying daemon osd.7 on vm11 2026-03-08T23:02:34.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:33 vm06 bash[27746]: cephadm 2026-03-08T23:02:32.420658+0000 mgr.y (mgr.14150) 221 : cephadm [INF] Deploying daemon osd.7 on vm11 2026-03-08T23:02:34.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:33 vm06 bash[27746]: cephadm 2026-03-08T23:02:32.420658+0000 mgr.y (mgr.14150) 221 : cephadm [INF] Deploying daemon osd.7 on vm11 2026-03-08T23:02:34.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:34 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:02:34.307 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:02:34 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-08T23:02:34.307 INFO:journalctl@ceph.osd.4.vm11.stdout:Mar 08 23:02:34 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:02:34.308 INFO:journalctl@ceph.osd.5.vm11.stdout:Mar 08 23:02:34 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:02:34.308 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 08 23:02:34 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
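The KillMode=none warnings above refer to the ceph-<fsid>@.service template that cephadm generates on each host. As a hedged sketch only (cephadm owns and regenerates this unit, so a manual override is illustrative, not the supported fix), a systemd drop-in replacing the deprecated setting with one of the values systemd suggests would look like:

```ini
# Hypothetical drop-in path; the unit name matches the fsid seen in the log.
# /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service.d/killmode.conf
[Service]
# Replace the deprecated KillMode=none flagged at line 23 of the unit file.
# 'mixed' sends SIGTERM to the main process and SIGKILL to the rest of the
# control group on stop, restoring systemd's process lifecycle management.
KillMode=mixed
```

After writing the drop-in, `systemctl daemon-reload` would be needed for it to take effect; whether 'mixed' or 'control-group' is appropriate here depends on how the containerized daemons handle shutdown, which this log does not show.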
2026-03-08T23:02:35.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:34 vm06 bash[20625]: cluster 2026-03-08T23:02:33.490567+0000 mgr.y (mgr.14150) 222 : cluster [DBG] pgmap v198: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail
2026-03-08T23:02:35.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:34 vm06 bash[20625]: audit 2026-03-08T23:02:34.178225+0000 mon.a (mon.0) 595 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T23:02:35.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:34 vm06 bash[20625]: audit 2026-03-08T23:02:34.187701+0000 mon.a (mon.0) 596 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:02:35.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:34 vm06 bash[20625]: audit 2026-03-08T23:02:34.195782+0000 mon.a (mon.0) 597 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:02:35.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:34 vm06 bash[27746]: cluster 2026-03-08T23:02:33.490567+0000 mgr.y (mgr.14150) 222 : cluster [DBG] pgmap v198: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail
2026-03-08T23:02:35.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:34 vm06 bash[27746]: audit 2026-03-08T23:02:34.178225+0000 mon.a (mon.0) 595 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T23:02:35.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:34 vm06 bash[27746]: audit 2026-03-08T23:02:34.187701+0000 mon.a (mon.0) 596 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:02:35.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:34 vm06 bash[27746]: audit 2026-03-08T23:02:34.195782+0000 mon.a (mon.0) 597 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:02:35.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:34 vm11 bash[23232]: cluster 2026-03-08T23:02:33.490567+0000 mgr.y (mgr.14150) 222 : cluster [DBG] pgmap v198: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail
2026-03-08T23:02:35.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:34 vm11 bash[23232]: audit 2026-03-08T23:02:34.178225+0000 mon.a (mon.0) 595 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T23:02:35.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:34 vm11 bash[23232]: audit 2026-03-08T23:02:34.187701+0000 mon.a (mon.0) 596 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:02:35.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:34 vm11 bash[23232]: audit 2026-03-08T23:02:34.195782+0000 mon.a (mon.0) 597 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:02:37.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:36 vm06 bash[20625]: cluster 2026-03-08T23:02:35.490799+0000 mgr.y (mgr.14150) 223 : cluster [DBG] pgmap v199: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail
2026-03-08T23:02:37.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:36 vm06 bash[27746]: cluster 2026-03-08T23:02:35.490799+0000 mgr.y (mgr.14150) 223 : cluster [DBG] pgmap v199: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail
2026-03-08T23:02:37.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:36 vm11 bash[23232]: cluster 2026-03-08T23:02:35.490799+0000 mgr.y (mgr.14150) 223 : cluster [DBG] pgmap v199: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail
2026-03-08T23:02:39.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:39 vm06 bash[20625]: cluster 2026-03-08T23:02:37.491118+0000 mgr.y (mgr.14150) 224 : cluster [DBG] pgmap v200: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail
2026-03-08T23:02:39.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:39 vm06 bash[20625]: audit 2026-03-08T23:02:38.226000+0000 mon.b (mon.1) 24 : audit [INF] from='osd.7 v2:192.168.123.111:6812/5515467' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch
2026-03-08T23:02:39.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:39 vm06 bash[20625]: audit 2026-03-08T23:02:38.227552+0000 mon.a (mon.0) 598 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch
2026-03-08T23:02:39.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:39 vm06 bash[27746]: cluster 2026-03-08T23:02:37.491118+0000 mgr.y (mgr.14150) 224 : cluster [DBG] pgmap v200: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail
2026-03-08T23:02:39.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:39 vm06 bash[27746]: audit 2026-03-08T23:02:38.226000+0000 mon.b (mon.1) 24 : audit [INF] from='osd.7 v2:192.168.123.111:6812/5515467' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch
2026-03-08T23:02:39.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:39 vm06 bash[27746]: audit 2026-03-08T23:02:38.227552+0000 mon.a (mon.0) 598 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch
2026-03-08T23:02:39.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:39 vm11 bash[23232]: cluster 2026-03-08T23:02:37.491118+0000 mgr.y (mgr.14150) 224 : cluster [DBG] pgmap v200: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail
2026-03-08T23:02:39.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:39 vm11 bash[23232]: audit 2026-03-08T23:02:38.226000+0000 mon.b (mon.1) 24 : audit [INF] from='osd.7 v2:192.168.123.111:6812/5515467' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch
2026-03-08T23:02:39.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:39 vm11 bash[23232]: audit 2026-03-08T23:02:38.227552+0000 mon.a (mon.0) 598 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]: dispatch
2026-03-08T23:02:40.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:40 vm06 bash[27746]: audit 2026-03-08T23:02:39.095178+0000 mon.a (mon.0) 599 : audit [INF] from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]': finished
2026-03-08T23:02:40.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:40 vm06 bash[27746]: audit 2026-03-08T23:02:39.097477+0000 mon.b (mon.1) 25 : audit [INF] from='osd.7 v2:192.168.123.111:6812/5515467' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm11", "root=default"]}]: dispatch
2026-03-08T23:02:40.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:40 vm06 bash[27746]: cluster 2026-03-08T23:02:39.099953+0000 mon.a (mon.0) 600 : cluster [DBG] osdmap e48: 8 total, 7 up, 8 in
2026-03-08T23:02:40.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:40 vm06 bash[27746]: audit 2026-03-08T23:02:39.100751+0000 mon.a (mon.0) 601 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-08T23:02:40.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:40 vm06 bash[27746]: audit 2026-03-08T23:02:39.100824+0000 mon.a (mon.0) 602 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm11", "root=default"]}]: dispatch
2026-03-08T23:02:40.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:40 vm06 bash[27746]: audit 2026-03-08T23:02:40.097852+0000 mon.a (mon.0) 603 : audit [INF] from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm11", "root=default"]}]': finished
2026-03-08T23:02:40.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:40 vm06 bash[27746]: cluster 2026-03-08T23:02:40.103969+0000 mon.a (mon.0) 604 : cluster [DBG] osdmap e49: 8 total, 7 up, 8 in
2026-03-08T23:02:40.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:40 vm06 bash[20625]: audit 2026-03-08T23:02:39.095178+0000 mon.a (mon.0) 599 : audit [INF] from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]': finished
2026-03-08T23:02:40.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:40 vm06 bash[20625]: audit 2026-03-08T23:02:39.097477+0000 mon.b (mon.1) 25 : audit [INF] from='osd.7 v2:192.168.123.111:6812/5515467' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm11", "root=default"]}]: dispatch
2026-03-08T23:02:40.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:40 vm06 bash[20625]: cluster 2026-03-08T23:02:39.099953+0000 mon.a (mon.0) 600 : cluster [DBG] osdmap e48: 8 total, 7 up, 8 in
2026-03-08T23:02:40.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:40 vm06 bash[20625]: audit 2026-03-08T23:02:39.100751+0000 mon.a (mon.0) 601 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-08T23:02:40.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:40 vm06 bash[20625]: audit 2026-03-08T23:02:39.100824+0000 mon.a (mon.0) 602 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm11", "root=default"]}]: dispatch
2026-03-08T23:02:40.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:40 vm06 bash[20625]: audit 2026-03-08T23:02:40.097852+0000 mon.a (mon.0) 603 : audit [INF] from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm11", "root=default"]}]': finished
2026-03-08T23:02:40.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:40 vm06 bash[20625]: cluster 2026-03-08T23:02:40.103969+0000 mon.a (mon.0) 604 : cluster [DBG] osdmap e49: 8 total, 7 up, 8 in
2026-03-08T23:02:40.553 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:40 vm11 bash[23232]: audit 2026-03-08T23:02:39.095178+0000 mon.a (mon.0) 599 : audit [INF] from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["7"]}]': finished
2026-03-08T23:02:40.553 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:40 vm11 bash[23232]: audit 2026-03-08T23:02:39.097477+0000 mon.b (mon.1) 25 : audit [INF] from='osd.7 v2:192.168.123.111:6812/5515467' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm11", "root=default"]}]: dispatch
2026-03-08T23:02:40.554 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:40 vm11 bash[23232]: cluster 2026-03-08T23:02:39.099953+0000 mon.a (mon.0) 600 : cluster [DBG] osdmap e48: 8 total, 7 up, 8 in
2026-03-08T23:02:40.554 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:40 vm11 bash[23232]: audit 2026-03-08T23:02:39.100751+0000 mon.a (mon.0) 601 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-08T23:02:40.554 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:40 vm11 bash[23232]: audit 2026-03-08T23:02:39.100824+0000 mon.a (mon.0) 602 : audit [INF] from='osd.7 ' entity='osd.7' cmd=[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm11", "root=default"]}]: dispatch
2026-03-08T23:02:40.554 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:40 vm11 bash[23232]: audit 2026-03-08T23:02:40.097852+0000 mon.a (mon.0) 603 : audit [INF] from='osd.7 ' entity='osd.7' cmd='[{"prefix": "osd crush create-or-move", "id": 7, "weight":0.0195, "args": ["host=vm11", "root=default"]}]': finished
2026-03-08T23:02:40.554 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:40 vm11 bash[23232]: cluster 2026-03-08T23:02:40.103969+0000 mon.a (mon.0) 604 : cluster [DBG] osdmap e49: 8 total, 7 up, 8 in
2026-03-08T23:02:41.500 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:41 vm11 bash[23232]: cluster 2026-03-08T23:02:39.491385+0000 mgr.y (mgr.14150) 225 : cluster [DBG] pgmap v202: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail
2026-03-08T23:02:41.500 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:41 vm11 bash[23232]: audit 2026-03-08T23:02:40.104270+0000 mon.a (mon.0) 605 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-08T23:02:41.500 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:41 vm11 bash[23232]: audit 2026-03-08T23:02:40.111532+0000 mon.a (mon.0) 606 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-08T23:02:41.500 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:41 vm11 bash[23232]: audit 2026-03-08T23:02:40.390921+0000 mon.a (mon.0) 607 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:02:41.500 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:41 vm11 bash[23232]: audit 2026-03-08T23:02:40.407822+0000 mon.a (mon.0) 608 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:02:41.500 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:41 vm11 bash[23232]: audit 2026-03-08T23:02:40.409635+0000 mon.a (mon.0) 609 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:02:41.500 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:41 vm11 bash[23232]: audit 2026-03-08T23:02:40.410540+0000 mon.a (mon.0) 610 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T23:02:41.500 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:41 vm11 bash[23232]: audit 2026-03-08T23:02:40.419265+0000 mon.a (mon.0) 611 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:02:41.500 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:41 vm11 bash[23232]: cluster 2026-03-08T23:02:40.975452+0000 mon.a (mon.0) 612 : cluster [DBG] osdmap e50: 8 total, 7 up, 8 in
2026-03-08T23:02:41.500 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:41 vm11 bash[23232]: audit 2026-03-08T23:02:40.975666+0000 mon.a (mon.0) 613 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-08T23:02:41.500 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:41 vm11 bash[23232]: audit 2026-03-08T23:02:41.105279+0000 mon.a (mon.0) 614 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-08T23:02:41.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:41 vm06 bash[20625]: cluster 2026-03-08T23:02:39.491385+0000 mgr.y (mgr.14150) 225 : cluster [DBG] pgmap v202: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail
2026-03-08T23:02:41.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:41 vm06 bash[20625]: audit 2026-03-08T23:02:40.104270+0000 mon.a (mon.0) 605 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-08T23:02:41.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:41 vm06 bash[20625]: audit 2026-03-08T23:02:40.111532+0000 mon.a (mon.0) 606 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-08T23:02:41.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:41 vm06 bash[20625]: audit 2026-03-08T23:02:40.390921+0000 mon.a (mon.0) 607 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:02:41.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:41 vm06 bash[20625]: audit 2026-03-08T23:02:40.407822+0000 mon.a (mon.0) 608 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:02:41.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:41 vm06 bash[20625]: audit 2026-03-08T23:02:40.409635+0000 mon.a (mon.0) 609 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:02:41.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:41 vm06 bash[20625]: audit 2026-03-08T23:02:40.410540+0000 mon.a (mon.0) 610 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T23:02:41.530
INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:41 vm06 bash[20625]: audit 2026-03-08T23:02:40.419265+0000 mon.a (mon.0) 611 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:02:41.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:41 vm06 bash[20625]: audit 2026-03-08T23:02:40.419265+0000 mon.a (mon.0) 611 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:02:41.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:41 vm06 bash[20625]: cluster 2026-03-08T23:02:40.975452+0000 mon.a (mon.0) 612 : cluster [DBG] osdmap e50: 8 total, 7 up, 8 in 2026-03-08T23:02:41.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:41 vm06 bash[20625]: cluster 2026-03-08T23:02:40.975452+0000 mon.a (mon.0) 612 : cluster [DBG] osdmap e50: 8 total, 7 up, 8 in 2026-03-08T23:02:41.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:41 vm06 bash[20625]: audit 2026-03-08T23:02:40.975666+0000 mon.a (mon.0) 613 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-08T23:02:41.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:41 vm06 bash[20625]: audit 2026-03-08T23:02:40.975666+0000 mon.a (mon.0) 613 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-08T23:02:41.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:41 vm06 bash[20625]: audit 2026-03-08T23:02:41.105279+0000 mon.a (mon.0) 614 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-08T23:02:41.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:41 vm06 bash[20625]: audit 2026-03-08T23:02:41.105279+0000 mon.a (mon.0) 614 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-08T23:02:41.530 
INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:41 vm06 bash[27746]: cluster 2026-03-08T23:02:39.491385+0000 mgr.y (mgr.14150) 225 : cluster [DBG] pgmap v202: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-08T23:02:41.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:41 vm06 bash[27746]: cluster 2026-03-08T23:02:39.491385+0000 mgr.y (mgr.14150) 225 : cluster [DBG] pgmap v202: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-08T23:02:41.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:41 vm06 bash[27746]: audit 2026-03-08T23:02:40.104270+0000 mon.a (mon.0) 605 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-08T23:02:41.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:41 vm06 bash[27746]: audit 2026-03-08T23:02:40.104270+0000 mon.a (mon.0) 605 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-08T23:02:41.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:41 vm06 bash[27746]: audit 2026-03-08T23:02:40.111532+0000 mon.a (mon.0) 606 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-08T23:02:41.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:41 vm06 bash[27746]: audit 2026-03-08T23:02:40.111532+0000 mon.a (mon.0) 606 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-08T23:02:41.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:41 vm06 bash[27746]: audit 2026-03-08T23:02:40.390921+0000 mon.a (mon.0) 607 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:02:41.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:41 vm06 bash[27746]: audit 2026-03-08T23:02:40.390921+0000 mon.a (mon.0) 
607 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:02:41.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:41 vm06 bash[27746]: audit 2026-03-08T23:02:40.407822+0000 mon.a (mon.0) 608 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:02:41.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:41 vm06 bash[27746]: audit 2026-03-08T23:02:40.407822+0000 mon.a (mon.0) 608 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:02:41.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:41 vm06 bash[27746]: audit 2026-03-08T23:02:40.409635+0000 mon.a (mon.0) 609 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:02:41.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:41 vm06 bash[27746]: audit 2026-03-08T23:02:40.409635+0000 mon.a (mon.0) 609 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:02:41.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:41 vm06 bash[27746]: audit 2026-03-08T23:02:40.410540+0000 mon.a (mon.0) 610 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:02:41.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:41 vm06 bash[27746]: audit 2026-03-08T23:02:40.410540+0000 mon.a (mon.0) 610 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:02:41.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:41 vm06 bash[27746]: audit 2026-03-08T23:02:40.419265+0000 mon.a (mon.0) 611 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:02:41.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 
23:02:41 vm06 bash[27746]: audit 2026-03-08T23:02:40.419265+0000 mon.a (mon.0) 611 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:02:41.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:41 vm06 bash[27746]: cluster 2026-03-08T23:02:40.975452+0000 mon.a (mon.0) 612 : cluster [DBG] osdmap e50: 8 total, 7 up, 8 in 2026-03-08T23:02:41.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:41 vm06 bash[27746]: cluster 2026-03-08T23:02:40.975452+0000 mon.a (mon.0) 612 : cluster [DBG] osdmap e50: 8 total, 7 up, 8 in 2026-03-08T23:02:41.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:41 vm06 bash[27746]: audit 2026-03-08T23:02:40.975666+0000 mon.a (mon.0) 613 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-08T23:02:41.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:41 vm06 bash[27746]: audit 2026-03-08T23:02:40.975666+0000 mon.a (mon.0) 613 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-08T23:02:41.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:41 vm06 bash[27746]: audit 2026-03-08T23:02:41.105279+0000 mon.a (mon.0) 614 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-08T23:02:41.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:41 vm06 bash[27746]: audit 2026-03-08T23:02:41.105279+0000 mon.a (mon.0) 614 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-08T23:02:41.734 INFO:teuthology.orchestra.run.vm11.stdout:Created osd(s) 7 on host 'vm11' 2026-03-08T23:02:41.828 DEBUG:teuthology.orchestra.run.vm11:osd.7> sudo journalctl -f -n 0 -u ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@osd.7.service 2026-03-08T23:02:41.829 INFO:tasks.cephadm:Waiting for 8 OSDs to 
come up... 2026-03-08T23:02:41.829 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e2eb96e6-1b41-11f1-83e5-75f1b5373d30 -- ceph osd stat -f json 2026-03-08T23:02:42.136 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:42 vm11 bash[23232]: cluster 2026-03-08T23:02:39.212224+0000 osd.7 (osd.7) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-08T23:02:42.137 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:42 vm11 bash[23232]: cluster 2026-03-08T23:02:39.212224+0000 osd.7 (osd.7) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-08T23:02:42.137 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:42 vm11 bash[23232]: cluster 2026-03-08T23:02:39.212273+0000 osd.7 (osd.7) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-08T23:02:42.137 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:42 vm11 bash[23232]: cluster 2026-03-08T23:02:39.212273+0000 osd.7 (osd.7) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-08T23:02:42.137 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:42 vm11 bash[23232]: audit 2026-03-08T23:02:41.507141+0000 mon.a (mon.0) 615 : audit [INF] from='osd.7 ' entity='osd.7' 2026-03-08T23:02:42.137 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:42 vm11 bash[23232]: audit 2026-03-08T23:02:41.507141+0000 mon.a (mon.0) 615 : audit [INF] from='osd.7 ' entity='osd.7' 2026-03-08T23:02:42.137 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:42 vm11 bash[23232]: audit 2026-03-08T23:02:41.705644+0000 mon.a (mon.0) 616 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:02:42.137 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:42 vm11 bash[23232]: audit 2026-03-08T23:02:41.705644+0000 mon.a (mon.0) 616 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 
cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:02:42.137 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:42 vm11 bash[23232]: audit 2026-03-08T23:02:41.725798+0000 mon.a (mon.0) 617 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:02:42.137 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:42 vm11 bash[23232]: audit 2026-03-08T23:02:41.725798+0000 mon.a (mon.0) 617 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:02:42.137 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:42 vm11 bash[23232]: audit 2026-03-08T23:02:41.730722+0000 mon.a (mon.0) 618 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:02:42.137 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:42 vm11 bash[23232]: audit 2026-03-08T23:02:41.730722+0000 mon.a (mon.0) 618 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:02:42.137 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:42 vm11 bash[23232]: cluster 2026-03-08T23:02:41.978748+0000 mon.a (mon.0) 619 : cluster [INF] osd.7 v2:192.168.123.111:6812/5515467 boot 2026-03-08T23:02:42.137 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:42 vm11 bash[23232]: cluster 2026-03-08T23:02:41.978748+0000 mon.a (mon.0) 619 : cluster [INF] osd.7 v2:192.168.123.111:6812/5515467 boot 2026-03-08T23:02:42.137 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:42 vm11 bash[23232]: cluster 2026-03-08T23:02:41.978774+0000 mon.a (mon.0) 620 : cluster [DBG] osdmap e51: 8 total, 8 up, 8 in 2026-03-08T23:02:42.137 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:42 vm11 bash[23232]: cluster 2026-03-08T23:02:41.978774+0000 mon.a (mon.0) 620 : cluster [DBG] osdmap e51: 8 total, 8 up, 8 in 2026-03-08T23:02:42.137 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:42 vm11 bash[23232]: audit 2026-03-08T23:02:41.979239+0000 mon.a (mon.0) 621 : audit [DBG] from='mgr.14150 
192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-08T23:02:42.137 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:42 vm11 bash[23232]: audit 2026-03-08T23:02:41.979239+0000 mon.a (mon.0) 621 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-08T23:02:42.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:42 vm06 bash[20625]: cluster 2026-03-08T23:02:39.212224+0000 osd.7 (osd.7) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-08T23:02:42.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:42 vm06 bash[20625]: cluster 2026-03-08T23:02:39.212224+0000 osd.7 (osd.7) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-08T23:02:42.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:42 vm06 bash[20625]: cluster 2026-03-08T23:02:39.212273+0000 osd.7 (osd.7) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-08T23:02:42.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:42 vm06 bash[20625]: cluster 2026-03-08T23:02:39.212273+0000 osd.7 (osd.7) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-08T23:02:42.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:42 vm06 bash[20625]: audit 2026-03-08T23:02:41.507141+0000 mon.a (mon.0) 615 : audit [INF] from='osd.7 ' entity='osd.7' 2026-03-08T23:02:42.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:42 vm06 bash[20625]: audit 2026-03-08T23:02:41.507141+0000 mon.a (mon.0) 615 : audit [INF] from='osd.7 ' entity='osd.7' 2026-03-08T23:02:42.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:42 vm06 bash[20625]: audit 2026-03-08T23:02:41.705644+0000 mon.a (mon.0) 616 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:02:42.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:42 vm06 bash[20625]: audit 2026-03-08T23:02:41.705644+0000 mon.a (mon.0) 616 : audit [DBG] 
from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:02:42.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:42 vm06 bash[20625]: audit 2026-03-08T23:02:41.725798+0000 mon.a (mon.0) 617 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:02:42.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:42 vm06 bash[20625]: audit 2026-03-08T23:02:41.725798+0000 mon.a (mon.0) 617 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:02:42.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:42 vm06 bash[20625]: audit 2026-03-08T23:02:41.730722+0000 mon.a (mon.0) 618 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:02:42.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:42 vm06 bash[20625]: audit 2026-03-08T23:02:41.730722+0000 mon.a (mon.0) 618 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:02:42.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:42 vm06 bash[20625]: cluster 2026-03-08T23:02:41.978748+0000 mon.a (mon.0) 619 : cluster [INF] osd.7 v2:192.168.123.111:6812/5515467 boot 2026-03-08T23:02:42.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:42 vm06 bash[20625]: cluster 2026-03-08T23:02:41.978748+0000 mon.a (mon.0) 619 : cluster [INF] osd.7 v2:192.168.123.111:6812/5515467 boot 2026-03-08T23:02:42.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:42 vm06 bash[20625]: cluster 2026-03-08T23:02:41.978774+0000 mon.a (mon.0) 620 : cluster [DBG] osdmap e51: 8 total, 8 up, 8 in 2026-03-08T23:02:42.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:42 vm06 bash[20625]: cluster 2026-03-08T23:02:41.978774+0000 mon.a (mon.0) 620 : cluster [DBG] osdmap e51: 8 total, 8 up, 8 in 2026-03-08T23:02:42.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:42 vm06 bash[20625]: audit 
2026-03-08T23:02:41.979239+0000 mon.a (mon.0) 621 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-08T23:02:42.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:42 vm06 bash[20625]: audit 2026-03-08T23:02:41.979239+0000 mon.a (mon.0) 621 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-08T23:02:42.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:42 vm06 bash[27746]: cluster 2026-03-08T23:02:39.212224+0000 osd.7 (osd.7) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-08T23:02:42.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:42 vm06 bash[27746]: cluster 2026-03-08T23:02:39.212224+0000 osd.7 (osd.7) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-08T23:02:42.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:42 vm06 bash[27746]: cluster 2026-03-08T23:02:39.212273+0000 osd.7 (osd.7) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-08T23:02:42.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:42 vm06 bash[27746]: cluster 2026-03-08T23:02:39.212273+0000 osd.7 (osd.7) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-08T23:02:42.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:42 vm06 bash[27746]: audit 2026-03-08T23:02:41.507141+0000 mon.a (mon.0) 615 : audit [INF] from='osd.7 ' entity='osd.7' 2026-03-08T23:02:42.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:42 vm06 bash[27746]: audit 2026-03-08T23:02:41.507141+0000 mon.a (mon.0) 615 : audit [INF] from='osd.7 ' entity='osd.7' 2026-03-08T23:02:42.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:42 vm06 bash[27746]: audit 2026-03-08T23:02:41.705644+0000 mon.a (mon.0) 616 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:02:42.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:42 vm06 
bash[27746]: audit 2026-03-08T23:02:41.705644+0000 mon.a (mon.0) 616 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:02:42.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:42 vm06 bash[27746]: audit 2026-03-08T23:02:41.725798+0000 mon.a (mon.0) 617 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:02:42.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:42 vm06 bash[27746]: audit 2026-03-08T23:02:41.725798+0000 mon.a (mon.0) 617 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:02:42.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:42 vm06 bash[27746]: audit 2026-03-08T23:02:41.730722+0000 mon.a (mon.0) 618 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:02:42.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:42 vm06 bash[27746]: audit 2026-03-08T23:02:41.730722+0000 mon.a (mon.0) 618 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:02:42.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:42 vm06 bash[27746]: cluster 2026-03-08T23:02:41.978748+0000 mon.a (mon.0) 619 : cluster [INF] osd.7 v2:192.168.123.111:6812/5515467 boot 2026-03-08T23:02:42.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:42 vm06 bash[27746]: cluster 2026-03-08T23:02:41.978748+0000 mon.a (mon.0) 619 : cluster [INF] osd.7 v2:192.168.123.111:6812/5515467 boot 2026-03-08T23:02:42.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:42 vm06 bash[27746]: cluster 2026-03-08T23:02:41.978774+0000 mon.a (mon.0) 620 : cluster [DBG] osdmap e51: 8 total, 8 up, 8 in 2026-03-08T23:02:42.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:42 vm06 bash[27746]: cluster 2026-03-08T23:02:41.978774+0000 mon.a (mon.0) 620 : cluster [DBG] osdmap e51: 8 total, 8 up, 8 in 2026-03-08T23:02:42.530 
INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:42 vm06 bash[27746]: audit 2026-03-08T23:02:41.979239+0000 mon.a (mon.0) 621 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-08T23:02:42.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:42 vm06 bash[27746]: audit 2026-03-08T23:02:41.979239+0000 mon.a (mon.0) 621 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-08T23:02:43.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:43 vm06 bash[20625]: cluster 2026-03-08T23:02:41.491668+0000 mgr.y (mgr.14150) 226 : cluster [DBG] pgmap v205: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-08T23:02:43.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:43 vm06 bash[20625]: cluster 2026-03-08T23:02:41.491668+0000 mgr.y (mgr.14150) 226 : cluster [DBG] pgmap v205: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-08T23:02:43.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:43 vm06 bash[20625]: cluster 2026-03-08T23:02:42.982792+0000 mon.a (mon.0) 622 : cluster [DBG] osdmap e52: 8 total, 8 up, 8 in 2026-03-08T23:02:43.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:43 vm06 bash[20625]: cluster 2026-03-08T23:02:42.982792+0000 mon.a (mon.0) 622 : cluster [DBG] osdmap e52: 8 total, 8 up, 8 in 2026-03-08T23:02:43.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:43 vm06 bash[27746]: cluster 2026-03-08T23:02:41.491668+0000 mgr.y (mgr.14150) 226 : cluster [DBG] pgmap v205: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-08T23:02:43.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:43 vm06 bash[27746]: cluster 2026-03-08T23:02:41.491668+0000 mgr.y (mgr.14150) 226 : cluster [DBG] pgmap v205: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 
2026-03-08T23:02:43.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:43 vm06 bash[27746]: cluster 2026-03-08T23:02:42.982792+0000 mon.a (mon.0) 622 : cluster [DBG] osdmap e52: 8 total, 8 up, 8 in 2026-03-08T23:02:43.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:43 vm06 bash[27746]: cluster 2026-03-08T23:02:42.982792+0000 mon.a (mon.0) 622 : cluster [DBG] osdmap e52: 8 total, 8 up, 8 in 2026-03-08T23:02:43.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:43 vm11 bash[23232]: cluster 2026-03-08T23:02:41.491668+0000 mgr.y (mgr.14150) 226 : cluster [DBG] pgmap v205: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-08T23:02:43.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:43 vm11 bash[23232]: cluster 2026-03-08T23:02:41.491668+0000 mgr.y (mgr.14150) 226 : cluster [DBG] pgmap v205: 1 pgs: 1 active+clean; 449 KiB data, 187 MiB used, 140 GiB / 140 GiB avail 2026-03-08T23:02:43.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:43 vm11 bash[23232]: cluster 2026-03-08T23:02:42.982792+0000 mon.a (mon.0) 622 : cluster [DBG] osdmap e52: 8 total, 8 up, 8 in 2026-03-08T23:02:43.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:43 vm11 bash[23232]: cluster 2026-03-08T23:02:42.982792+0000 mon.a (mon.0) 622 : cluster [DBG] osdmap e52: 8 total, 8 up, 8 in 2026-03-08T23:02:44.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:44 vm06 bash[20625]: cluster 2026-03-08T23:02:43.491979+0000 mgr.y (mgr.14150) 227 : cluster [DBG] pgmap v208: 1 pgs: 1 active+clean; 449 KiB data, 534 MiB used, 159 GiB / 160 GiB avail 2026-03-08T23:02:44.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:44 vm06 bash[20625]: cluster 2026-03-08T23:02:43.491979+0000 mgr.y (mgr.14150) 227 : cluster [DBG] pgmap v208: 1 pgs: 1 active+clean; 449 KiB data, 534 MiB used, 159 GiB / 160 GiB avail 2026-03-08T23:02:44.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:44 vm06 bash[27746]: cluster 2026-03-08T23:02:43.491979+0000 
mgr.y (mgr.14150) 227 : cluster [DBG] pgmap v208: 1 pgs: 1 active+clean; 449 KiB data, 534 MiB used, 159 GiB / 160 GiB avail 2026-03-08T23:02:44.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:44 vm06 bash[27746]: cluster 2026-03-08T23:02:43.491979+0000 mgr.y (mgr.14150) 227 : cluster [DBG] pgmap v208: 1 pgs: 1 active+clean; 449 KiB data, 534 MiB used, 159 GiB / 160 GiB avail 2026-03-08T23:02:44.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:44 vm11 bash[23232]: cluster 2026-03-08T23:02:43.491979+0000 mgr.y (mgr.14150) 227 : cluster [DBG] pgmap v208: 1 pgs: 1 active+clean; 449 KiB data, 534 MiB used, 159 GiB / 160 GiB avail 2026-03-08T23:02:44.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:44 vm11 bash[23232]: cluster 2026-03-08T23:02:43.491979+0000 mgr.y (mgr.14150) 227 : cluster [DBG] pgmap v208: 1 pgs: 1 active+clean; 449 KiB data, 534 MiB used, 159 GiB / 160 GiB avail 2026-03-08T23:02:46.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:45 vm06 bash[20625]: cluster 2026-03-08T23:02:44.453182+0000 mon.a (mon.0) 623 : cluster [DBG] osdmap e53: 8 total, 8 up, 8 in 2026-03-08T23:02:46.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:45 vm06 bash[20625]: cluster 2026-03-08T23:02:44.453182+0000 mon.a (mon.0) 623 : cluster [DBG] osdmap e53: 8 total, 8 up, 8 in 2026-03-08T23:02:46.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:45 vm06 bash[27746]: cluster 2026-03-08T23:02:44.453182+0000 mon.a (mon.0) 623 : cluster [DBG] osdmap e53: 8 total, 8 up, 8 in 2026-03-08T23:02:46.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:45 vm06 bash[27746]: cluster 2026-03-08T23:02:44.453182+0000 mon.a (mon.0) 623 : cluster [DBG] osdmap e53: 8 total, 8 up, 8 in 2026-03-08T23:02:46.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:45 vm11 bash[23232]: cluster 2026-03-08T23:02:44.453182+0000 mon.a (mon.0) 623 : cluster [DBG] osdmap e53: 8 total, 8 up, 8 in 2026-03-08T23:02:46.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 
23:02:45 vm11 bash[23232]: cluster 2026-03-08T23:02:44.453182+0000 mon.a (mon.0) 623 : cluster [DBG] osdmap e53: 8 total, 8 up, 8 in 2026-03-08T23:02:46.463 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/mon.c/config 2026-03-08T23:02:46.990 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:46 vm11 bash[23232]: cluster 2026-03-08T23:02:45.492247+0000 mgr.y (mgr.14150) 228 : cluster [DBG] pgmap v210: 1 pgs: 1 active+clean; 449 KiB data, 534 MiB used, 159 GiB / 160 GiB avail 2026-03-08T23:02:46.990 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:46 vm11 bash[23232]: cluster 2026-03-08T23:02:45.492247+0000 mgr.y (mgr.14150) 228 : cluster [DBG] pgmap v210: 1 pgs: 1 active+clean; 449 KiB data, 534 MiB used, 159 GiB / 160 GiB avail 2026-03-08T23:02:47.085 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-08T23:02:47.276 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:46 vm06 bash[20625]: cluster 2026-03-08T23:02:45.492247+0000 mgr.y (mgr.14150) 228 : cluster [DBG] pgmap v210: 1 pgs: 1 active+clean; 449 KiB data, 534 MiB used, 159 GiB / 160 GiB avail 2026-03-08T23:02:47.276 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:46 vm06 bash[20625]: cluster 2026-03-08T23:02:45.492247+0000 mgr.y (mgr.14150) 228 : cluster [DBG] pgmap v210: 1 pgs: 1 active+clean; 449 KiB data, 534 MiB used, 159 GiB / 160 GiB avail 2026-03-08T23:02:47.276 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:46 vm06 bash[27746]: cluster 2026-03-08T23:02:45.492247+0000 mgr.y (mgr.14150) 228 : cluster [DBG] pgmap v210: 1 pgs: 1 active+clean; 449 KiB data, 534 MiB used, 159 GiB / 160 GiB avail 2026-03-08T23:02:47.276 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:46 vm06 bash[27746]: cluster 2026-03-08T23:02:45.492247+0000 mgr.y (mgr.14150) 228 : cluster [DBG] pgmap v210: 1 pgs: 1 active+clean; 449 KiB data, 534 MiB used, 159 GiB / 160 GiB avail 2026-03-08T23:02:47.277 
INFO:teuthology.orchestra.run.vm06.stdout:{"epoch":53,"num_osds":8,"num_up_osds":8,"osd_up_since":1773010961,"num_in_osds":8,"osd_in_since":1773010942,"num_remapped_pgs":0} 2026-03-08T23:02:47.277 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid e2eb96e6-1b41-11f1-83e5-75f1b5373d30 -- ceph osd dump --format=json 2026-03-08T23:02:48.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:47 vm11 bash[23232]: audit 2026-03-08T23:02:47.085765+0000 mon.c (mon.2) 11 : audit [DBG] from='client.? 192.168.123.106:0/1507858829' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-08T23:02:48.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:47 vm11 bash[23232]: audit 2026-03-08T23:02:47.085765+0000 mon.c (mon.2) 11 : audit [DBG] from='client.? 192.168.123.106:0/1507858829' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-08T23:02:48.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:47 vm06 bash[20625]: audit 2026-03-08T23:02:47.085765+0000 mon.c (mon.2) 11 : audit [DBG] from='client.? 192.168.123.106:0/1507858829' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-08T23:02:48.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:47 vm06 bash[20625]: audit 2026-03-08T23:02:47.085765+0000 mon.c (mon.2) 11 : audit [DBG] from='client.? 192.168.123.106:0/1507858829' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-08T23:02:48.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:47 vm06 bash[27746]: audit 2026-03-08T23:02:47.085765+0000 mon.c (mon.2) 11 : audit [DBG] from='client.? 
192.168.123.106:0/1507858829' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-08T23:02:48.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:47 vm06 bash[27746]: audit 2026-03-08T23:02:47.085765+0000 mon.c (mon.2) 11 : audit [DBG] from='client.? 192.168.123.106:0/1507858829' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-08T23:02:49.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:49 vm06 bash[20625]: cluster 2026-03-08T23:02:47.492633+0000 mgr.y (mgr.14150) 229 : cluster [DBG] pgmap v211: 1 pgs: 1 active+clean; 449 KiB data, 615 MiB used, 159 GiB / 160 GiB avail 2026-03-08T23:02:49.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:49 vm06 bash[20625]: cluster 2026-03-08T23:02:47.492633+0000 mgr.y (mgr.14150) 229 : cluster [DBG] pgmap v211: 1 pgs: 1 active+clean; 449 KiB data, 615 MiB used, 159 GiB / 160 GiB avail 2026-03-08T23:02:49.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:49 vm06 bash[20625]: cephadm 2026-03-08T23:02:48.142549+0000 mgr.y (mgr.14150) 230 : cephadm [INF] Detected new or changed devices on vm11 2026-03-08T23:02:49.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:49 vm06 bash[20625]: cephadm 2026-03-08T23:02:48.142549+0000 mgr.y (mgr.14150) 230 : cephadm [INF] Detected new or changed devices on vm11 2026-03-08T23:02:49.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:49 vm06 bash[20625]: audit 2026-03-08T23:02:48.148718+0000 mon.a (mon.0) 624 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:02:49.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:49 vm06 bash[20625]: audit 2026-03-08T23:02:48.148718+0000 mon.a (mon.0) 624 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:02:49.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:49 vm06 bash[20625]: audit 2026-03-08T23:02:48.153668+0000 mon.a (mon.0) 625 : audit [INF] from='mgr.14150 
192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:02:49.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:49 vm06 bash[20625]: audit 2026-03-08T23:02:48.153668+0000 mon.a (mon.0) 625 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:02:49.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:49 vm06 bash[20625]: audit 2026-03-08T23:02:48.155042+0000 mon.a (mon.0) 626 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-08T23:02:49.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:49 vm06 bash[20625]: audit 2026-03-08T23:02:48.155042+0000 mon.a (mon.0) 626 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-08T23:02:49.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:49 vm06 bash[20625]: audit 2026-03-08T23:02:48.155609+0000 mon.a (mon.0) 627 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-08T23:02:49.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:49 vm06 bash[20625]: audit 2026-03-08T23:02:48.155609+0000 mon.a (mon.0) 627 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-08T23:02:49.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:49 vm06 bash[20625]: audit 2026-03-08T23:02:48.156108+0000 mon.a (mon.0) 628 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-08T23:02:49.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:49 vm06 bash[20625]: audit 2026-03-08T23:02:48.156108+0000 mon.a (mon.0) 628 : audit 
[INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-08T23:02:49.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:49 vm06 bash[20625]: audit 2026-03-08T23:02:48.156553+0000 mon.a (mon.0) 629 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.7", "name": "osd_memory_target"}]: dispatch 2026-03-08T23:02:49.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:49 vm06 bash[20625]: audit 2026-03-08T23:02:48.156553+0000 mon.a (mon.0) 629 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.7", "name": "osd_memory_target"}]: dispatch 2026-03-08T23:02:49.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:49 vm06 bash[20625]: cephadm 2026-03-08T23:02:48.156931+0000 mgr.y (mgr.14150) 231 : cephadm [INF] Adjusting osd_memory_target on vm11 to 113.9M 2026-03-08T23:02:49.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:49 vm06 bash[20625]: cephadm 2026-03-08T23:02:48.156931+0000 mgr.y (mgr.14150) 231 : cephadm [INF] Adjusting osd_memory_target on vm11 to 113.9M 2026-03-08T23:02:49.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:49 vm06 bash[20625]: cephadm 2026-03-08T23:02:48.157439+0000 mgr.y (mgr.14150) 232 : cephadm [WRN] Unable to set osd_memory_target on vm11 to 119480422: error parsing value: Value '119480422' is below minimum 939524096 2026-03-08T23:02:49.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:49 vm06 bash[20625]: cephadm 2026-03-08T23:02:48.157439+0000 mgr.y (mgr.14150) 232 : cephadm [WRN] Unable to set osd_memory_target on vm11 to 119480422: error parsing value: Value '119480422' is below minimum 939524096 2026-03-08T23:02:49.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:49 vm06 bash[20625]: audit 2026-03-08T23:02:48.157744+0000 mon.a (mon.0) 630 : audit [DBG] from='mgr.14150 
192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:02:49.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:49 vm06 bash[20625]: audit 2026-03-08T23:02:48.157744+0000 mon.a (mon.0) 630 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:02:49.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:49 vm06 bash[20625]: audit 2026-03-08T23:02:48.158214+0000 mon.a (mon.0) 631 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:02:49.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:49 vm06 bash[20625]: audit 2026-03-08T23:02:48.158214+0000 mon.a (mon.0) 631 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:02:49.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:49 vm06 bash[20625]: audit 2026-03-08T23:02:48.162671+0000 mon.a (mon.0) 632 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:02:49.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:49 vm06 bash[20625]: audit 2026-03-08T23:02:48.162671+0000 mon.a (mon.0) 632 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:02:49.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:49 vm06 bash[27746]: cluster 2026-03-08T23:02:47.492633+0000 mgr.y (mgr.14150) 229 : cluster [DBG] pgmap v211: 1 pgs: 1 active+clean; 449 KiB data, 615 MiB used, 159 GiB / 160 GiB avail 2026-03-08T23:02:49.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:49 vm06 bash[27746]: cluster 2026-03-08T23:02:47.492633+0000 mgr.y (mgr.14150) 229 : cluster [DBG] pgmap v211: 1 pgs: 1 active+clean; 449 KiB data, 615 MiB used, 159 GiB / 160 GiB avail 2026-03-08T23:02:49.530 
INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:49 vm06 bash[27746]: cephadm 2026-03-08T23:02:48.142549+0000 mgr.y (mgr.14150) 230 : cephadm [INF] Detected new or changed devices on vm11 2026-03-08T23:02:49.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:49 vm06 bash[27746]: cephadm 2026-03-08T23:02:48.142549+0000 mgr.y (mgr.14150) 230 : cephadm [INF] Detected new or changed devices on vm11 2026-03-08T23:02:49.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:49 vm06 bash[27746]: audit 2026-03-08T23:02:48.148718+0000 mon.a (mon.0) 624 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:02:49.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:49 vm06 bash[27746]: audit 2026-03-08T23:02:48.148718+0000 mon.a (mon.0) 624 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:02:49.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:49 vm06 bash[27746]: audit 2026-03-08T23:02:48.153668+0000 mon.a (mon.0) 625 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:02:49.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:49 vm06 bash[27746]: audit 2026-03-08T23:02:48.153668+0000 mon.a (mon.0) 625 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:02:49.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:49 vm06 bash[27746]: audit 2026-03-08T23:02:48.155042+0000 mon.a (mon.0) 626 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-08T23:02:49.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:49 vm06 bash[27746]: audit 2026-03-08T23:02:48.155042+0000 mon.a (mon.0) 626 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-08T23:02:49.530 
INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:49 vm06 bash[27746]: audit 2026-03-08T23:02:48.155609+0000 mon.a (mon.0) 627 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-08T23:02:49.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:49 vm06 bash[27746]: audit 2026-03-08T23:02:48.155609+0000 mon.a (mon.0) 627 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-08T23:02:49.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:49 vm06 bash[27746]: audit 2026-03-08T23:02:48.156108+0000 mon.a (mon.0) 628 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-08T23:02:49.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:49 vm06 bash[27746]: audit 2026-03-08T23:02:48.156108+0000 mon.a (mon.0) 628 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-08T23:02:49.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:49 vm06 bash[27746]: audit 2026-03-08T23:02:48.156553+0000 mon.a (mon.0) 629 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.7", "name": "osd_memory_target"}]: dispatch 2026-03-08T23:02:49.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:49 vm06 bash[27746]: audit 2026-03-08T23:02:48.156553+0000 mon.a (mon.0) 629 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.7", "name": "osd_memory_target"}]: dispatch 2026-03-08T23:02:49.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:49 vm06 bash[27746]: cephadm 2026-03-08T23:02:48.156931+0000 mgr.y 
(mgr.14150) 231 : cephadm [INF] Adjusting osd_memory_target on vm11 to 113.9M 2026-03-08T23:02:49.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:49 vm06 bash[27746]: cephadm 2026-03-08T23:02:48.156931+0000 mgr.y (mgr.14150) 231 : cephadm [INF] Adjusting osd_memory_target on vm11 to 113.9M 2026-03-08T23:02:49.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:49 vm06 bash[27746]: cephadm 2026-03-08T23:02:48.157439+0000 mgr.y (mgr.14150) 232 : cephadm [WRN] Unable to set osd_memory_target on vm11 to 119480422: error parsing value: Value '119480422' is below minimum 939524096 2026-03-08T23:02:49.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:49 vm06 bash[27746]: cephadm 2026-03-08T23:02:48.157439+0000 mgr.y (mgr.14150) 232 : cephadm [WRN] Unable to set osd_memory_target on vm11 to 119480422: error parsing value: Value '119480422' is below minimum 939524096 2026-03-08T23:02:49.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:49 vm06 bash[27746]: audit 2026-03-08T23:02:48.157744+0000 mon.a (mon.0) 630 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:02:49.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:49 vm06 bash[27746]: audit 2026-03-08T23:02:48.157744+0000 mon.a (mon.0) 630 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:02:49.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:49 vm06 bash[27746]: audit 2026-03-08T23:02:48.158214+0000 mon.a (mon.0) 631 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:02:49.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:49 vm06 bash[27746]: audit 2026-03-08T23:02:48.158214+0000 mon.a (mon.0) 631 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": 
"auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:02:49.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:49 vm06 bash[27746]: audit 2026-03-08T23:02:48.162671+0000 mon.a (mon.0) 632 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:02:49.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:49 vm06 bash[27746]: audit 2026-03-08T23:02:48.162671+0000 mon.a (mon.0) 632 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:02:49.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:49 vm11 bash[23232]: cluster 2026-03-08T23:02:47.492633+0000 mgr.y (mgr.14150) 229 : cluster [DBG] pgmap v211: 1 pgs: 1 active+clean; 449 KiB data, 615 MiB used, 159 GiB / 160 GiB avail 2026-03-08T23:02:49.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:49 vm11 bash[23232]: cluster 2026-03-08T23:02:47.492633+0000 mgr.y (mgr.14150) 229 : cluster [DBG] pgmap v211: 1 pgs: 1 active+clean; 449 KiB data, 615 MiB used, 159 GiB / 160 GiB avail 2026-03-08T23:02:49.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:49 vm11 bash[23232]: cephadm 2026-03-08T23:02:48.142549+0000 mgr.y (mgr.14150) 230 : cephadm [INF] Detected new or changed devices on vm11 2026-03-08T23:02:49.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:49 vm11 bash[23232]: cephadm 2026-03-08T23:02:48.142549+0000 mgr.y (mgr.14150) 230 : cephadm [INF] Detected new or changed devices on vm11 2026-03-08T23:02:49.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:49 vm11 bash[23232]: audit 2026-03-08T23:02:48.148718+0000 mon.a (mon.0) 624 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:02:49.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:49 vm11 bash[23232]: audit 2026-03-08T23:02:48.148718+0000 mon.a (mon.0) 624 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:02:49.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:49 vm11 
bash[23232]: audit 2026-03-08T23:02:48.153668+0000 mon.a (mon.0) 625 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:02:49.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:49 vm11 bash[23232]: audit 2026-03-08T23:02:48.153668+0000 mon.a (mon.0) 625 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:02:49.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:49 vm11 bash[23232]: audit 2026-03-08T23:02:48.155042+0000 mon.a (mon.0) 626 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-08T23:02:49.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:49 vm11 bash[23232]: audit 2026-03-08T23:02:48.155042+0000 mon.a (mon.0) 626 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"}]: dispatch 2026-03-08T23:02:49.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:49 vm11 bash[23232]: audit 2026-03-08T23:02:48.155609+0000 mon.a (mon.0) 627 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-08T23:02:49.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:49 vm11 bash[23232]: audit 2026-03-08T23:02:48.155609+0000 mon.a (mon.0) 627 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"}]: dispatch 2026-03-08T23:02:49.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:49 vm11 bash[23232]: audit 2026-03-08T23:02:48.156108+0000 mon.a (mon.0) 628 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-08T23:02:49.557 
INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:49 vm11 bash[23232]: audit 2026-03-08T23:02:48.156108+0000 mon.a (mon.0) 628 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.6", "name": "osd_memory_target"}]: dispatch 2026-03-08T23:02:49.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:49 vm11 bash[23232]: audit 2026-03-08T23:02:48.156553+0000 mon.a (mon.0) 629 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.7", "name": "osd_memory_target"}]: dispatch 2026-03-08T23:02:49.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:49 vm11 bash[23232]: audit 2026-03-08T23:02:48.156553+0000 mon.a (mon.0) 629 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd.7", "name": "osd_memory_target"}]: dispatch 2026-03-08T23:02:49.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:49 vm11 bash[23232]: cephadm 2026-03-08T23:02:48.156931+0000 mgr.y (mgr.14150) 231 : cephadm [INF] Adjusting osd_memory_target on vm11 to 113.9M 2026-03-08T23:02:49.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:49 vm11 bash[23232]: cephadm 2026-03-08T23:02:48.156931+0000 mgr.y (mgr.14150) 231 : cephadm [INF] Adjusting osd_memory_target on vm11 to 113.9M 2026-03-08T23:02:49.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:49 vm11 bash[23232]: cephadm 2026-03-08T23:02:48.157439+0000 mgr.y (mgr.14150) 232 : cephadm [WRN] Unable to set osd_memory_target on vm11 to 119480422: error parsing value: Value '119480422' is below minimum 939524096 2026-03-08T23:02:49.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:49 vm11 bash[23232]: cephadm 2026-03-08T23:02:48.157439+0000 mgr.y (mgr.14150) 232 : cephadm [WRN] Unable to set osd_memory_target on vm11 to 119480422: error parsing value: Value '119480422' is below minimum 939524096 2026-03-08T23:02:49.557 
INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:49 vm11 bash[23232]: audit 2026-03-08T23:02:48.157744+0000 mon.a (mon.0) 630 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:02:49.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:49 vm11 bash[23232]: audit 2026-03-08T23:02:48.157744+0000 mon.a (mon.0) 630 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:02:49.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:49 vm11 bash[23232]: audit 2026-03-08T23:02:48.158214+0000 mon.a (mon.0) 631 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:02:49.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:49 vm11 bash[23232]: audit 2026-03-08T23:02:48.158214+0000 mon.a (mon.0) 631 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:02:49.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:49 vm11 bash[23232]: audit 2026-03-08T23:02:48.162671+0000 mon.a (mon.0) 632 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:02:49.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:49 vm11 bash[23232]: audit 2026-03-08T23:02:48.162671+0000 mon.a (mon.0) 632 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:02:51.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:51 vm06 bash[20625]: cluster 2026-03-08T23:02:49.492905+0000 mgr.y (mgr.14150) 233 : cluster [DBG] pgmap v212: 1 pgs: 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail 2026-03-08T23:02:51.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:51 vm06 bash[20625]: cluster 2026-03-08T23:02:49.492905+0000 mgr.y 
(mgr.14150) 233 : cluster [DBG] pgmap v212: 1 pgs: 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail 2026-03-08T23:02:51.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:51 vm06 bash[27746]: cluster 2026-03-08T23:02:49.492905+0000 mgr.y (mgr.14150) 233 : cluster [DBG] pgmap v212: 1 pgs: 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail 2026-03-08T23:02:51.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:51 vm06 bash[27746]: cluster 2026-03-08T23:02:49.492905+0000 mgr.y (mgr.14150) 233 : cluster [DBG] pgmap v212: 1 pgs: 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail 2026-03-08T23:02:51.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:51 vm11 bash[23232]: cluster 2026-03-08T23:02:49.492905+0000 mgr.y (mgr.14150) 233 : cluster [DBG] pgmap v212: 1 pgs: 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail 2026-03-08T23:02:51.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:51 vm11 bash[23232]: cluster 2026-03-08T23:02:49.492905+0000 mgr.y (mgr.14150) 233 : cluster [DBG] pgmap v212: 1 pgs: 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail 2026-03-08T23:02:51.910 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/mon.c/config 2026-03-08T23:02:52.172 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-08T23:02:52.172 
INFO:teuthology.orchestra.run.vm06.stdout:{"epoch":53,"fsid":"e2eb96e6-1b41-11f1-83e5-75f1b5373d30","created":"2026-03-08T22:56:50.043169+0000","modified":"2026-03-08T23:02:43.975835+0000","last_up_change":"2026-03-08T23:02:41.969578+0000","last_in_change":"2026-03-08T23:02:22.838818+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":18,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":1,"max_osd":8,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"squid","allow_crimson":false,"pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-08T22:59:49.511510+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"22","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"no
ne"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}},"read_balance":{"score_type":"Fair distribution","score_acting":7.8899998664855957,"score_stable":7.8899998664855957,"optimal_score":0.37999999523162842,"raw_score_acting":3,"raw_score_stable":3,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"f584135b-773d-4be0-b5f4-b849576faa2e","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":8,"up_thru":51,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6801","nonce":1756339851}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6802","nonce":1756339851}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6804","nonce":1756339851}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6803","nonce":1756339851}]},"public_addr":"192.168.123.106:6801/1756339851","cluster_addr":"192.168.123.106:6802/1756339851","heartbeat_back_addr":"192.168.123.106:6804/1756339851","heartbeat_front_addr":"192.168.123.106:6803/1756339851","state":["exists","up"]},{"osd":1,"uuid":"2022422b-3e71-4162-b64b-3d25e2ad079e","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":33,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6805","nonce":2598119140}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6806","nonce":2598119140}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6808","nonce":2598119140}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.1
06:6807","nonce":2598119140}]},"public_addr":"192.168.123.106:6805/2598119140","cluster_addr":"192.168.123.106:6806/2598119140","heartbeat_back_addr":"192.168.123.106:6808/2598119140","heartbeat_front_addr":"192.168.123.106:6807/2598119140","state":["exists","up"]},{"osd":2,"uuid":"127338cf-5856-4d11-8a9b-9cbd216d8507","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":18,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6809","nonce":2508962009}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6810","nonce":2508962009}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6812","nonce":2508962009}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6811","nonce":2508962009}]},"public_addr":"192.168.123.106:6809/2508962009","cluster_addr":"192.168.123.106:6810/2508962009","heartbeat_back_addr":"192.168.123.106:6812/2508962009","heartbeat_front_addr":"192.168.123.106:6811/2508962009","state":["exists","up"]},{"osd":3,"uuid":"19da1389-a7b0-483c-b2d4-8be50f26c1c4","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":26,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6813","nonce":3847325262}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6814","nonce":3847325262}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6816","nonce":3847325262}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6815","nonce":3847325262}]},"public_addr":"192.168.123.106:6813/3847325262","cluster_addr":"192.168.123.106:6814/3847325262","heartbeat_back_addr":"192.168.123.106:6816/3847325262","heartbeat_front_addr":"192.168.123.106:6815/3847325262","state":["exists","up"]},{"osd":4,"uuid":"2b8b0ad5-79bc-4b4c-a515-bc6c029f416f","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean
_begin":0,"last_clean_end":0,"up_from":32,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.111:6800","nonce":1718317342}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.111:6801","nonce":1718317342}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.111:6803","nonce":1718317342}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.111:6802","nonce":1718317342}]},"public_addr":"192.168.123.111:6800/1718317342","cluster_addr":"192.168.123.111:6801/1718317342","heartbeat_back_addr":"192.168.123.111:6803/1718317342","heartbeat_front_addr":"192.168.123.111:6802/1718317342","state":["exists","up"]},{"osd":5,"uuid":"ebf4133c-ae3a-4afe-9e9e-4c894f65f53e","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":38,"up_thru":39,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.111:6804","nonce":3102108212}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.111:6805","nonce":3102108212}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.111:6807","nonce":3102108212}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.111:6806","nonce":3102108212}]},"public_addr":"192.168.123.111:6804/3102108212","cluster_addr":"192.168.123.111:6805/3102108212","heartbeat_back_addr":"192.168.123.111:6807/3102108212","heartbeat_front_addr":"192.168.123.111:6806/3102108212","state":["exists","up"]},{"osd":6,"uuid":"1359b0d9-00db-474d-93f0-8246b9a8fa82","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":44,"up_thru":45,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.111:6808","nonce":3646507391}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.111:6809","nonce":3646507391}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.111:6811","nonce":3646507391}]},"heartbeat_fr
ont_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.111:6810","nonce":3646507391}]},"public_addr":"192.168.123.111:6808/3646507391","cluster_addr":"192.168.123.111:6809/3646507391","heartbeat_back_addr":"192.168.123.111:6811/3646507391","heartbeat_front_addr":"192.168.123.111:6810/3646507391","state":["exists","up"]},{"osd":7,"uuid":"29b40029-6843-47e4-b83e-af6cefd3e500","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":51,"up_thru":52,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.111:6812","nonce":5515467}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.111:6813","nonce":5515467}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.111:6815","nonce":5515467}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.111:6814","nonce":5515467}]},"public_addr":"192.168.123.111:6812/5515467","cluster_addr":"192.168.123.111:6813/5515467","heartbeat_back_addr":"192.168.123.111:6815/5515467","heartbeat_front_addr":"192.168.123.111:6814/5515467","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-08T22:58:36.480874+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-08T22:59:10.436552+0000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-08T22:59:44.575086+0000","dead_epoch":0},{"osd":3,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-08T23:00:19.486922+0000","dead_epoch":0},{"osd":4,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":454070
1547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-08T23:00:54.034338+0000","dead_epoch":0},{"osd":5,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-08T23:01:29.981522+0000","dead_epoch":0},{"osd":6,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-08T23:02:03.219993+0000","dead_epoch":0},{"osd":7,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-08T23:02:39.212275+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.106:6800/1580927884":"2026-03-09T22:57:11.437532+0000","192.168.123.106:0/410680846":"2026-03-09T22:57:11.437532+0000","192.168.123.106:0/1740051211":"2026-03-09T22:57:11.437532+0000","192.168.123.106:6800/1101559289":"2026-03-09T22:57:01.174562+0000","192.168.123.106:0/2787078610":"2026-03-09T22:57:01.174562+0000","192.168.123.106:0/1313816001":"2026-03-09T22:57:01.174562+0000","192.168.123.106:0/1915233046":"2026-03-09T22:57:11.437532+0000","192.168.123.106:0/2523815248":"2026-03-09T22:57:01.174562+0000"},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-08T23:02:52.241 INFO:tasks.cephadm.ceph_manager.ceph:[{'pool': 1, 'pool_name': '.mgr', 'create_time': '2026-03-08T22:59:49.511510+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 
'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'is_stretch_pool': False, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 1, 'pg_placement_num': 1, 'pg_placement_num_target': 1, 'pg_num_target': 1, 'pg_num_pending': 1, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '22', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {'pg_num_max': 32, 'pg_num_min': 1}, 'application_metadata': {'mgr': {}}, 'read_balance': {'score_type': 'Fair distribution', 'score_acting': 7.889999866485596, 'score_stable': 7.889999866485596, 'optimal_score': 0.3799999952316284, 'raw_score_acting': 3, 'raw_score_stable': 3, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}] 2026-03-08T23:02:52.242 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image 
quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid e2eb96e6-1b41-11f1-83e5-75f1b5373d30 -- ceph osd pool get .mgr pg_num 2026-03-08T23:02:52.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:52 vm06 bash[20625]: audit 2026-03-08T23:02:52.171365+0000 mon.b (mon.1) 26 : audit [DBG] from='client.? 192.168.123.106:0/3691772630' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-08T23:02:52.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:52 vm06 bash[20625]: audit 2026-03-08T23:02:52.171365+0000 mon.b (mon.1) 26 : audit [DBG] from='client.? 192.168.123.106:0/3691772630' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-08T23:02:52.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:52 vm06 bash[27746]: audit 2026-03-08T23:02:52.171365+0000 mon.b (mon.1) 26 : audit [DBG] from='client.? 192.168.123.106:0/3691772630' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-08T23:02:52.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:52 vm06 bash[27746]: audit 2026-03-08T23:02:52.171365+0000 mon.b (mon.1) 26 : audit [DBG] from='client.? 192.168.123.106:0/3691772630' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-08T23:02:52.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:52 vm11 bash[23232]: audit 2026-03-08T23:02:52.171365+0000 mon.b (mon.1) 26 : audit [DBG] from='client.? 192.168.123.106:0/3691772630' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-08T23:02:52.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:52 vm11 bash[23232]: audit 2026-03-08T23:02:52.171365+0000 mon.b (mon.1) 26 : audit [DBG] from='client.? 
192.168.123.106:0/3691772630' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-08T23:02:53.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:53 vm06 bash[20625]: cluster 2026-03-08T23:02:51.493205+0000 mgr.y (mgr.14150) 234 : cluster [DBG] pgmap v213: 1 pgs: 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail 2026-03-08T23:02:53.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:53 vm06 bash[20625]: cluster 2026-03-08T23:02:51.493205+0000 mgr.y (mgr.14150) 234 : cluster [DBG] pgmap v213: 1 pgs: 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail 2026-03-08T23:02:53.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:53 vm06 bash[27746]: cluster 2026-03-08T23:02:51.493205+0000 mgr.y (mgr.14150) 234 : cluster [DBG] pgmap v213: 1 pgs: 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail 2026-03-08T23:02:53.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:53 vm06 bash[27746]: cluster 2026-03-08T23:02:51.493205+0000 mgr.y (mgr.14150) 234 : cluster [DBG] pgmap v213: 1 pgs: 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail 2026-03-08T23:02:53.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:53 vm11 bash[23232]: cluster 2026-03-08T23:02:51.493205+0000 mgr.y (mgr.14150) 234 : cluster [DBG] pgmap v213: 1 pgs: 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail 2026-03-08T23:02:53.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:53 vm11 bash[23232]: cluster 2026-03-08T23:02:51.493205+0000 mgr.y (mgr.14150) 234 : cluster [DBG] pgmap v213: 1 pgs: 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail 2026-03-08T23:02:55.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:55 vm06 bash[20625]: cluster 2026-03-08T23:02:53.493475+0000 mgr.y (mgr.14150) 235 : cluster [DBG] pgmap v214: 1 pgs: 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail 2026-03-08T23:02:55.529 
INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:55 vm06 bash[20625]: cluster 2026-03-08T23:02:53.493475+0000 mgr.y (mgr.14150) 235 : cluster [DBG] pgmap v214: 1 pgs: 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail 2026-03-08T23:02:55.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:55 vm06 bash[27746]: cluster 2026-03-08T23:02:53.493475+0000 mgr.y (mgr.14150) 235 : cluster [DBG] pgmap v214: 1 pgs: 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail 2026-03-08T23:02:55.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:55 vm06 bash[27746]: cluster 2026-03-08T23:02:53.493475+0000 mgr.y (mgr.14150) 235 : cluster [DBG] pgmap v214: 1 pgs: 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail 2026-03-08T23:02:55.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:55 vm11 bash[23232]: cluster 2026-03-08T23:02:53.493475+0000 mgr.y (mgr.14150) 235 : cluster [DBG] pgmap v214: 1 pgs: 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail 2026-03-08T23:02:55.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:55 vm11 bash[23232]: cluster 2026-03-08T23:02:53.493475+0000 mgr.y (mgr.14150) 235 : cluster [DBG] pgmap v214: 1 pgs: 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail 2026-03-08T23:02:55.932 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/mon.c/config 2026-03-08T23:02:56.192 INFO:teuthology.orchestra.run.vm06.stdout:pg_num: 1 2026-03-08T23:02:56.647 INFO:tasks.cephadm:Adding ceph.rgw.foo.a on vm06 2026-03-08T23:02:56.647 DEBUG:teuthology.orchestra.run.vm11:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e2eb96e6-1b41-11f1-83e5-75f1b5373d30 -- ceph orch apply rgw foo.a --placement '1;vm06=foo.a' 2026-03-08T23:02:56.944 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 
23:02:56 vm06 bash[20625]: audit 2026-03-08T23:02:56.192120+0000 mon.c (mon.2) 12 : audit [DBG] from='client.? 192.168.123.106:0/770899378' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]: dispatch 2026-03-08T23:02:56.944 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:56 vm06 bash[20625]: audit 2026-03-08T23:02:56.192120+0000 mon.c (mon.2) 12 : audit [DBG] from='client.? 192.168.123.106:0/770899378' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]: dispatch 2026-03-08T23:02:56.944 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:56 vm06 bash[27746]: audit 2026-03-08T23:02:56.192120+0000 mon.c (mon.2) 12 : audit [DBG] from='client.? 192.168.123.106:0/770899378' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]: dispatch 2026-03-08T23:02:56.944 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:56 vm06 bash[27746]: audit 2026-03-08T23:02:56.192120+0000 mon.c (mon.2) 12 : audit [DBG] from='client.? 192.168.123.106:0/770899378' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]: dispatch 2026-03-08T23:02:57.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:56 vm11 bash[23232]: audit 2026-03-08T23:02:56.192120+0000 mon.c (mon.2) 12 : audit [DBG] from='client.? 192.168.123.106:0/770899378' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]: dispatch 2026-03-08T23:02:57.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:56 vm11 bash[23232]: audit 2026-03-08T23:02:56.192120+0000 mon.c (mon.2) 12 : audit [DBG] from='client.? 
192.168.123.106:0/770899378' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]: dispatch 2026-03-08T23:02:58.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:57 vm06 bash[20625]: cluster 2026-03-08T23:02:55.493755+0000 mgr.y (mgr.14150) 236 : cluster [DBG] pgmap v215: 1 pgs: 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail 2026-03-08T23:02:58.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:57 vm06 bash[20625]: cluster 2026-03-08T23:02:55.493755+0000 mgr.y (mgr.14150) 236 : cluster [DBG] pgmap v215: 1 pgs: 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail 2026-03-08T23:02:58.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:57 vm06 bash[27746]: cluster 2026-03-08T23:02:55.493755+0000 mgr.y (mgr.14150) 236 : cluster [DBG] pgmap v215: 1 pgs: 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail 2026-03-08T23:02:58.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:57 vm06 bash[27746]: cluster 2026-03-08T23:02:55.493755+0000 mgr.y (mgr.14150) 236 : cluster [DBG] pgmap v215: 1 pgs: 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail 2026-03-08T23:02:58.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:57 vm11 bash[23232]: cluster 2026-03-08T23:02:55.493755+0000 mgr.y (mgr.14150) 236 : cluster [DBG] pgmap v215: 1 pgs: 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail 2026-03-08T23:02:58.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:57 vm11 bash[23232]: cluster 2026-03-08T23:02:55.493755+0000 mgr.y (mgr.14150) 236 : cluster [DBG] pgmap v215: 1 pgs: 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail 2026-03-08T23:02:59.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:58 vm06 bash[20625]: cluster 2026-03-08T23:02:57.494015+0000 mgr.y (mgr.14150) 237 : cluster [DBG] pgmap v216: 1 pgs: 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail 2026-03-08T23:02:59.029 
INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:02:58 vm06 bash[20625]: cluster 2026-03-08T23:02:57.494015+0000 mgr.y (mgr.14150) 237 : cluster [DBG] pgmap v216: 1 pgs: 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail 2026-03-08T23:02:59.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:58 vm06 bash[27746]: cluster 2026-03-08T23:02:57.494015+0000 mgr.y (mgr.14150) 237 : cluster [DBG] pgmap v216: 1 pgs: 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail 2026-03-08T23:02:59.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:02:58 vm06 bash[27746]: cluster 2026-03-08T23:02:57.494015+0000 mgr.y (mgr.14150) 237 : cluster [DBG] pgmap v216: 1 pgs: 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail 2026-03-08T23:02:59.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:58 vm11 bash[23232]: cluster 2026-03-08T23:02:57.494015+0000 mgr.y (mgr.14150) 237 : cluster [DBG] pgmap v216: 1 pgs: 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail 2026-03-08T23:02:59.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:02:58 vm11 bash[23232]: cluster 2026-03-08T23:02:57.494015+0000 mgr.y (mgr.14150) 237 : cluster [DBG] pgmap v216: 1 pgs: 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail 2026-03-08T23:03:01.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:00 vm06 bash[20625]: cluster 2026-03-08T23:02:59.494276+0000 mgr.y (mgr.14150) 238 : cluster [DBG] pgmap v217: 1 pgs: 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail 2026-03-08T23:03:01.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:00 vm06 bash[20625]: cluster 2026-03-08T23:02:59.494276+0000 mgr.y (mgr.14150) 238 : cluster [DBG] pgmap v217: 1 pgs: 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail 2026-03-08T23:03:01.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:00 vm06 bash[27746]: cluster 2026-03-08T23:02:59.494276+0000 mgr.y (mgr.14150) 238 : cluster [DBG] pgmap v217: 1 
pgs: 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail 2026-03-08T23:03:01.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:00 vm06 bash[27746]: cluster 2026-03-08T23:02:59.494276+0000 mgr.y (mgr.14150) 238 : cluster [DBG] pgmap v217: 1 pgs: 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail 2026-03-08T23:03:01.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:00 vm11 bash[23232]: cluster 2026-03-08T23:02:59.494276+0000 mgr.y (mgr.14150) 238 : cluster [DBG] pgmap v217: 1 pgs: 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail 2026-03-08T23:03:01.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:00 vm11 bash[23232]: cluster 2026-03-08T23:02:59.494276+0000 mgr.y (mgr.14150) 238 : cluster [DBG] pgmap v217: 1 pgs: 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail 2026-03-08T23:03:01.271 INFO:teuthology.orchestra.run.vm11.stderr:Inferring config /var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/mon.b/config 2026-03-08T23:03:01.658 INFO:teuthology.orchestra.run.vm11.stdout:Scheduled rgw.foo.a update... 2026-03-08T23:03:01.719 DEBUG:teuthology.orchestra.run.vm06:rgw.foo.a> sudo journalctl -f -n 0 -u ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@rgw.foo.a.service 2026-03-08T23:03:01.720 INFO:tasks.cephadm:Adding ceph.iscsi.iscsi.a on vm11 2026-03-08T23:03:01.720 DEBUG:teuthology.orchestra.run.vm11:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e2eb96e6-1b41-11f1-83e5-75f1b5373d30 -- ceph osd pool create datapool 3 3 replicated 2026-03-08T23:03:02.897 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:02 vm06 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:03:02.897 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:02 vm06 bash[20625]: cluster 2026-03-08T23:03:01.494508+0000 mgr.y (mgr.14150) 239 : cluster [DBG] pgmap v218: 1 pgs: 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail 2026-03-08T23:03:02.897 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:02 vm06 bash[20625]: cluster 2026-03-08T23:03:01.494508+0000 mgr.y (mgr.14150) 239 : cluster [DBG] pgmap v218: 1 pgs: 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail 2026-03-08T23:03:02.897 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:02 vm06 bash[20625]: audit 2026-03-08T23:03:01.614974+0000 mgr.y (mgr.14150) 240 : audit [DBG] from='client.14391 -' entity='client.admin' cmd=[{"prefix": "orch apply rgw", "svc_id": "foo.a", "placement": "1;vm06=foo.a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:03:02.897 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:02 vm06 bash[20625]: audit 2026-03-08T23:03:01.614974+0000 mgr.y (mgr.14150) 240 : audit [DBG] from='client.14391 -' entity='client.admin' cmd=[{"prefix": "orch apply rgw", "svc_id": "foo.a", "placement": "1;vm06=foo.a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:03:02.897 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:02 vm06 bash[20625]: cephadm 2026-03-08T23:03:01.615902+0000 mgr.y (mgr.14150) 241 : cephadm [INF] Saving service rgw.foo.a spec with placement vm06=foo.a;count:1 2026-03-08T23:03:02.897 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:02 vm06 bash[20625]: cephadm 2026-03-08T23:03:01.615902+0000 mgr.y (mgr.14150) 241 : cephadm [INF] Saving service rgw.foo.a spec with placement vm06=foo.a;count:1 2026-03-08T23:03:02.897 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:02 vm06 bash[20625]: audit 2026-03-08T23:03:01.657879+0000 mon.a (mon.0) 633 : audit [INF] 
from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:03:02.897 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:02 vm06 bash[20625]: audit 2026-03-08T23:03:01.657879+0000 mon.a (mon.0) 633 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:03:02.897 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:02 vm06 bash[20625]: audit 2026-03-08T23:03:01.658808+0000 mon.a (mon.0) 634 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:03:02.897 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:02 vm06 bash[20625]: audit 2026-03-08T23:03:01.658808+0000 mon.a (mon.0) 634 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:03:02.897 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:02 vm06 bash[20625]: audit 2026-03-08T23:03:02.005827+0000 mon.a (mon.0) 635 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:03:02.897 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:02 vm06 bash[20625]: audit 2026-03-08T23:03:02.005827+0000 mon.a (mon.0) 635 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:03:02.897 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:02 vm06 bash[20625]: audit 2026-03-08T23:03:02.006473+0000 mon.a (mon.0) 636 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:03:02.897 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:02 vm06 bash[20625]: audit 2026-03-08T23:03:02.006473+0000 mon.a (mon.0) 636 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": 
"client.admin"}]: dispatch 2026-03-08T23:03:02.897 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:02 vm06 bash[20625]: audit 2026-03-08T23:03:02.025529+0000 mon.a (mon.0) 637 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:03:02.897 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:02 vm06 bash[20625]: audit 2026-03-08T23:03:02.025529+0000 mon.a (mon.0) 637 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:03:02.897 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:02 vm06 bash[20625]: audit 2026-03-08T23:03:02.027548+0000 mon.a (mon.0) 638 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.a", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-08T23:03:02.897 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:02 vm06 bash[20625]: audit 2026-03-08T23:03:02.027548+0000 mon.a (mon.0) 638 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.a", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-08T23:03:02.897 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:02 vm06 bash[20625]: audit 2026-03-08T23:03:02.034988+0000 mon.a (mon.0) 639 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.a", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished 2026-03-08T23:03:02.897 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:02 vm06 bash[20625]: audit 2026-03-08T23:03:02.034988+0000 mon.a (mon.0) 639 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.a", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow 
rwx tag rgw *=*"]}]': finished 2026-03-08T23:03:02.897 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:02 vm06 bash[20625]: audit 2026-03-08T23:03:02.041456+0000 mon.a (mon.0) 640 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:03:02.897 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:02 vm06 bash[20625]: audit 2026-03-08T23:03:02.041456+0000 mon.a (mon.0) 640 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:03:02.897 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:02 vm06 bash[20625]: audit 2026-03-08T23:03:02.044512+0000 mon.a (mon.0) 641 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:03:02.898 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:02 vm06 bash[20625]: audit 2026-03-08T23:03:02.044512+0000 mon.a (mon.0) 641 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:03:02.898 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:02 vm06 bash[20625]: cephadm 2026-03-08T23:03:02.045190+0000 mgr.y (mgr.14150) 242 : cephadm [INF] Deploying daemon rgw.foo.a on vm06 2026-03-08T23:03:02.898 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:02 vm06 bash[20625]: cephadm 2026-03-08T23:03:02.045190+0000 mgr.y (mgr.14150) 242 : cephadm [INF] Deploying daemon rgw.foo.a on vm06 2026-03-08T23:03:02.898 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:02 vm06 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-08T23:03:02.898 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:03:02 vm06 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:03:02.898 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:03:02 vm06 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:03:02.898 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:02 vm06 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-08T23:03:02.898 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:02 vm06 bash[27746]: cluster 2026-03-08T23:03:01.494508+0000 mgr.y (mgr.14150) 239 : cluster [DBG] pgmap v218: 1 pgs: 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail 2026-03-08T23:03:02.898 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:02 vm06 bash[27746]: cluster 2026-03-08T23:03:01.494508+0000 mgr.y (mgr.14150) 239 : cluster [DBG] pgmap v218: 1 pgs: 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail 2026-03-08T23:03:02.898 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:02 vm06 bash[27746]: audit 2026-03-08T23:03:01.614974+0000 mgr.y (mgr.14150) 240 : audit [DBG] from='client.14391 -' entity='client.admin' cmd=[{"prefix": "orch apply rgw", "svc_id": "foo.a", "placement": "1;vm06=foo.a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:03:02.898 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:02 vm06 bash[27746]: audit 2026-03-08T23:03:01.614974+0000 mgr.y (mgr.14150) 240 : audit [DBG] from='client.14391 -' entity='client.admin' cmd=[{"prefix": "orch apply rgw", "svc_id": "foo.a", "placement": "1;vm06=foo.a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:03:02.898 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:02 vm06 bash[27746]: cephadm 2026-03-08T23:03:01.615902+0000 mgr.y (mgr.14150) 241 : cephadm [INF] Saving service rgw.foo.a spec with placement vm06=foo.a;count:1 2026-03-08T23:03:02.898 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:02 vm06 bash[27746]: cephadm 2026-03-08T23:03:01.615902+0000 mgr.y (mgr.14150) 241 : cephadm [INF] Saving service rgw.foo.a spec with placement vm06=foo.a;count:1 2026-03-08T23:03:02.898 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:02 vm06 bash[27746]: audit 2026-03-08T23:03:01.657879+0000 mon.a (mon.0) 633 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:03:02.898 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:02 vm06 bash[27746]: 
audit 2026-03-08T23:03:01.657879+0000 mon.a (mon.0) 633 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:03:02.898 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:02 vm06 bash[27746]: audit 2026-03-08T23:03:01.658808+0000 mon.a (mon.0) 634 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:03:02.898 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:02 vm06 bash[27746]: audit 2026-03-08T23:03:01.658808+0000 mon.a (mon.0) 634 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:03:02.898 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:02 vm06 bash[27746]: audit 2026-03-08T23:03:02.005827+0000 mon.a (mon.0) 635 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:03:02.898 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:02 vm06 bash[27746]: audit 2026-03-08T23:03:02.005827+0000 mon.a (mon.0) 635 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:03:02.898 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:02 vm06 bash[27746]: audit 2026-03-08T23:03:02.006473+0000 mon.a (mon.0) 636 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:03:02.898 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:02 vm06 bash[27746]: audit 2026-03-08T23:03:02.006473+0000 mon.a (mon.0) 636 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:03:02.898 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:02 vm06 bash[27746]: audit 
2026-03-08T23:03:02.025529+0000 mon.a (mon.0) 637 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:03:02.898 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:02 vm06 bash[27746]: audit 2026-03-08T23:03:02.025529+0000 mon.a (mon.0) 637 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:03:02.898 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:02 vm06 bash[27746]: audit 2026-03-08T23:03:02.027548+0000 mon.a (mon.0) 638 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.a", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-08T23:03:02.898 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:02 vm06 bash[27746]: audit 2026-03-08T23:03:02.027548+0000 mon.a (mon.0) 638 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.a", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch 2026-03-08T23:03:02.898 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:02 vm06 bash[27746]: audit 2026-03-08T23:03:02.034988+0000 mon.a (mon.0) 639 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.a", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished 2026-03-08T23:03:02.899 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:02 vm06 bash[27746]: audit 2026-03-08T23:03:02.034988+0000 mon.a (mon.0) 639 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.a", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished 2026-03-08T23:03:02.899 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:02 vm06 bash[27746]: 
audit 2026-03-08T23:03:02.041456+0000 mon.a (mon.0) 640 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:03:02.899 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:02 vm06 bash[27746]: audit 2026-03-08T23:03:02.041456+0000 mon.a (mon.0) 640 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:03:02.899 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:02 vm06 bash[27746]: audit 2026-03-08T23:03:02.044512+0000 mon.a (mon.0) 641 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:03:02.899 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:02 vm06 bash[27746]: audit 2026-03-08T23:03:02.044512+0000 mon.a (mon.0) 641 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:03:02.899 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:02 vm06 bash[27746]: cephadm 2026-03-08T23:03:02.045190+0000 mgr.y (mgr.14150) 242 : cephadm [INF] Deploying daemon rgw.foo.a on vm06 2026-03-08T23:03:02.899 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:02 vm06 bash[27746]: cephadm 2026-03-08T23:03:02.045190+0000 mgr.y (mgr.14150) 242 : cephadm [INF] Deploying daemon rgw.foo.a on vm06 2026-03-08T23:03:02.899 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:02 vm06 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-08T23:03:02.899 INFO:journalctl@ceph.osd.0.vm06.stdout:Mar 08 23:03:02 vm06 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:03:02.899 INFO:journalctl@ceph.osd.1.vm06.stdout:Mar 08 23:03:02 vm06 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:03:02.899 INFO:journalctl@ceph.osd.2.vm06.stdout:Mar 08 23:03:02 vm06 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:03:02.899 INFO:journalctl@ceph.osd.3.vm06.stdout:Mar 08 23:03:02 vm06 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:03:03.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:02 vm11 bash[23232]: cluster 2026-03-08T23:03:01.494508+0000 mgr.y (mgr.14150) 239 : cluster [DBG] pgmap v218: 1 pgs: 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail
2026-03-08T23:03:03.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:02 vm11 bash[23232]: audit 2026-03-08T23:03:01.614974+0000 mgr.y (mgr.14150) 240 : audit [DBG] from='client.14391 -' entity='client.admin' cmd=[{"prefix": "orch apply rgw", "svc_id": "foo.a", "placement": "1;vm06=foo.a", "target": ["mon-mgr", ""]}]: dispatch
2026-03-08T23:03:03.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:02 vm11 bash[23232]: cephadm 2026-03-08T23:03:01.615902+0000 mgr.y (mgr.14150) 241 : cephadm [INF] Saving service rgw.foo.a spec with placement vm06=foo.a;count:1
2026-03-08T23:03:03.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:02 vm11 bash[23232]: audit 2026-03-08T23:03:01.657879+0000 mon.a (mon.0) 633 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:03:03.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:02 vm11 bash[23232]: audit 2026-03-08T23:03:01.658808+0000 mon.a (mon.0) 634 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T23:03:03.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:02 vm11 bash[23232]: audit 2026-03-08T23:03:02.005827+0000 mon.a (mon.0) 635 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:03:03.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:02 vm11 bash[23232]: audit 2026-03-08T23:03:02.006473+0000 mon.a (mon.0) 636 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T23:03:03.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:02 vm11 bash[23232]: audit 2026-03-08T23:03:02.025529+0000 mon.a (mon.0) 637 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:03:03.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:02 vm11 bash[23232]: audit 2026-03-08T23:03:02.027548+0000 mon.a (mon.0) 638 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.a", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
2026-03-08T23:03:03.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:02 vm11 bash[23232]: audit 2026-03-08T23:03:02.034988+0000 mon.a (mon.0) 639 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.foo.a", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
2026-03-08T23:03:03.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:02 vm11 bash[23232]: audit 2026-03-08T23:03:02.041456+0000 mon.a (mon.0) 640 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:03:03.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:02 vm11 bash[23232]: audit 2026-03-08T23:03:02.044512+0000 mon.a (mon.0) 641 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:03:03.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:02 vm11 bash[23232]: cephadm 2026-03-08T23:03:02.045190+0000 mgr.y (mgr.14150) 242 : cephadm [INF] Deploying daemon rgw.foo.a on vm06
2026-03-08T23:03:04.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:03 vm06 bash[20625]: audit 2026-03-08T23:03:02.926470+0000 mon.a (mon.0) 642 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:03:04.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:03 vm06 bash[20625]: audit 2026-03-08T23:03:02.934173+0000 mon.a (mon.0) 643 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:03:04.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:03 vm06 bash[20625]: audit 2026-03-08T23:03:02.939909+0000 mon.a (mon.0) 644 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:03:04.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:03 vm06 bash[20625]: cephadm 2026-03-08T23:03:02.940452+0000 mgr.y (mgr.14150) 243 : cephadm [INF] Saving service rgw.foo.a spec with placement vm06=foo.a;count:1
2026-03-08T23:03:04.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:03 vm06 bash[20625]: audit 2026-03-08T23:03:02.944606+0000 mon.a (mon.0) 645 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:03:04.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:03 vm06 bash[20625]: audit 2026-03-08T23:03:02.950404+0000 mon.a (mon.0) 646 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:03:04.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:03 vm06 bash[20625]: audit 2026-03-08T23:03:02.962546+0000 mon.a (mon.0) 647 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T23:03:04.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:03 vm06 bash[27746]: audit 2026-03-08T23:03:02.926470+0000 mon.a (mon.0) 642 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:03:04.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:03 vm06 bash[27746]: audit 2026-03-08T23:03:02.934173+0000 mon.a (mon.0) 643 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:03:04.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:03 vm06 bash[27746]: audit 2026-03-08T23:03:02.939909+0000 mon.a (mon.0) 644 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:03:04.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:03 vm06 bash[27746]: cephadm 2026-03-08T23:03:02.940452+0000 mgr.y (mgr.14150) 243 : cephadm [INF] Saving service rgw.foo.a spec with placement vm06=foo.a;count:1
2026-03-08T23:03:04.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:03 vm06 bash[27746]: audit 2026-03-08T23:03:02.944606+0000 mon.a (mon.0) 645 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:03:04.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:03 vm06 bash[27746]: audit 2026-03-08T23:03:02.950404+0000 mon.a (mon.0) 646 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:03:04.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:03 vm06 bash[27746]: audit 2026-03-08T23:03:02.962546+0000 mon.a (mon.0) 647 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T23:03:04.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:03 vm11 bash[23232]: audit 2026-03-08T23:03:02.926470+0000 mon.a (mon.0) 642 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:03:04.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:03 vm11 bash[23232]: audit 2026-03-08T23:03:02.934173+0000 mon.a (mon.0) 643 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:03:04.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:03 vm11 bash[23232]: audit 2026-03-08T23:03:02.939909+0000 mon.a (mon.0) 644 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:03:04.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:03 vm11 bash[23232]: cephadm 2026-03-08T23:03:02.940452+0000 mgr.y (mgr.14150) 243 : cephadm [INF] Saving service rgw.foo.a spec with placement vm06=foo.a;count:1
2026-03-08T23:03:04.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:03 vm11 bash[23232]: audit 2026-03-08T23:03:02.944606+0000 mon.a (mon.0) 645 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:03:04.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:03 vm11 bash[23232]: audit 2026-03-08T23:03:02.950404+0000 mon.a (mon.0) 646 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:03:04.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:03 vm11 bash[23232]: audit 2026-03-08T23:03:02.962546+0000 mon.a (mon.0) 647 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T23:03:05.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:05 vm06 bash[20625]: cluster 2026-03-08T23:03:03.494794+0000 mgr.y (mgr.14150) 244 : cluster [DBG] pgmap v219: 1 pgs: 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail
2026-03-08T23:03:05.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:05 vm06 bash[20625]: cluster 2026-03-08T23:03:03.967099+0000 mon.a (mon.0) 648 : cluster [DBG] osdmap e54: 8 total, 8 up, 8 in
2026-03-08T23:03:05.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:05 vm06 bash[20625]: audit 2026-03-08T23:03:03.969174+0000 mon.c (mon.2) 13 : audit [INF] from='client.? 192.168.123.106:0/4293354629' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
2026-03-08T23:03:05.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:05 vm06 bash[20625]: audit 2026-03-08T23:03:03.974576+0000 mon.a (mon.0) 649 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
2026-03-08T23:03:05.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:05 vm06 bash[27746]: cluster 2026-03-08T23:03:03.494794+0000 mgr.y (mgr.14150) 244 : cluster [DBG] pgmap v219: 1 pgs: 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail
2026-03-08T23:03:05.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:05 vm06 bash[27746]: cluster 2026-03-08T23:03:03.967099+0000 mon.a (mon.0) 648 : cluster [DBG] osdmap e54: 8 total, 8 up, 8 in
2026-03-08T23:03:05.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:05 vm06 bash[27746]: audit 2026-03-08T23:03:03.969174+0000 mon.c (mon.2) 13 : audit [INF] from='client.? 192.168.123.106:0/4293354629' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
2026-03-08T23:03:05.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:05 vm06 bash[27746]: audit 2026-03-08T23:03:03.974576+0000 mon.a (mon.0) 649 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
2026-03-08T23:03:05.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:05 vm11 bash[23232]: cluster 2026-03-08T23:03:03.494794+0000 mgr.y (mgr.14150) 244 : cluster [DBG] pgmap v219: 1 pgs: 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail
2026-03-08T23:03:05.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:05 vm11 bash[23232]: cluster 2026-03-08T23:03:03.967099+0000 mon.a (mon.0) 648 : cluster [DBG] osdmap e54: 8 total, 8 up, 8 in
2026-03-08T23:03:05.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:05 vm11 bash[23232]: audit 2026-03-08T23:03:03.969174+0000 mon.c (mon.2) 13 : audit [INF] from='client.? 192.168.123.106:0/4293354629' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
2026-03-08T23:03:05.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:05 vm11 bash[23232]: audit 2026-03-08T23:03:03.974576+0000 mon.a (mon.0) 649 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
2026-03-08T23:03:06.360 INFO:teuthology.orchestra.run.vm11.stderr:Inferring config /var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/mon.b/config
2026-03-08T23:03:06.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:06 vm06 bash[20625]: audit 2026-03-08T23:03:05.110309+0000 mon.a (mon.0) 650 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
2026-03-08T23:03:06.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:06 vm06 bash[20625]: cluster 2026-03-08T23:03:05.242228+0000 mon.a (mon.0) 651 : cluster [DBG] osdmap e55: 8 total, 8 up, 8 in
2026-03-08T23:03:06.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:06 vm06 bash[20625]: audit 2026-03-08T23:03:05.351600+0000 mon.a (mon.0) 652 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:03:06.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:06 vm06 bash[20625]: cluster 2026-03-08T23:03:06.122094+0000 mon.a (mon.0) 653 : cluster [DBG] osdmap e56: 8 total, 8 up, 8 in
2026-03-08T23:03:06.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:06 vm06 bash[20625]: audit 2026-03-08T23:03:06.130092+0000 mon.a (mon.0) 654 : audit [INF] from='client.? 192.168.123.106:0/58648323' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
2026-03-08T23:03:06.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:06 vm06 bash[27746]: audit 2026-03-08T23:03:05.110309+0000 mon.a (mon.0) 650 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
2026-03-08T23:03:06.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:06 vm06 bash[27746]: cluster 2026-03-08T23:03:05.242228+0000 mon.a (mon.0) 651 : cluster [DBG] osdmap e55: 8 total, 8 up, 8 in
2026-03-08T23:03:06.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:06 vm06 bash[27746]: audit 2026-03-08T23:03:05.351600+0000 mon.a (mon.0) 652 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:03:06.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:06 vm06 bash[27746]: cluster 2026-03-08T23:03:06.122094+0000 mon.a (mon.0) 653 : cluster [DBG] osdmap e56: 8 total, 8 up, 8 in
2026-03-08T23:03:06.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:06 vm06 bash[27746]: audit 2026-03-08T23:03:06.130092+0000 mon.a (mon.0) 654 : audit [INF] from='client.? 192.168.123.106:0/58648323' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
2026-03-08T23:03:06.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:06 vm11 bash[23232]: audit 2026-03-08T23:03:05.110309+0000 mon.a (mon.0) 650 : audit [INF] from='client.? ' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
2026-03-08T23:03:06.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:06 vm11 bash[23232]: cluster 2026-03-08T23:03:05.242228+0000 mon.a (mon.0) 651 : cluster [DBG] osdmap e55: 8 total, 8 up, 8 in
2026-03-08T23:03:06.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:06 vm11 bash[23232]: audit 2026-03-08T23:03:05.351600+0000 mon.a (mon.0) 652 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:03:06.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:06 vm11 bash[23232]: cluster 2026-03-08T23:03:06.122094+0000 mon.a (mon.0) 653 : cluster [DBG] osdmap e56: 8 total, 8 up, 8 in
2026-03-08T23:03:06.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:06 vm11 bash[23232]: audit 2026-03-08T23:03:06.130092+0000 mon.a (mon.0) 654 : audit [INF] from='client.? 
192.168.123.106:0/58648323' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch 2026-03-08T23:03:07.129 INFO:teuthology.orchestra.run.vm11.stderr:pool 'datapool' created 2026-03-08T23:03:07.338 DEBUG:teuthology.orchestra.run.vm11:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e2eb96e6-1b41-11f1-83e5-75f1b5373d30 -- rbd pool init datapool 2026-03-08T23:03:07.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:07 vm11 bash[23232]: cluster 2026-03-08T23:03:05.495177+0000 mgr.y (mgr.14150) 245 : cluster [DBG] pgmap v222: 33 pgs: 32 unknown, 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail 2026-03-08T23:03:07.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:07 vm11 bash[23232]: cluster 2026-03-08T23:03:05.495177+0000 mgr.y (mgr.14150) 245 : cluster [DBG] pgmap v222: 33 pgs: 32 unknown, 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail 2026-03-08T23:03:07.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:07 vm11 bash[23232]: audit 2026-03-08T23:03:06.753744+0000 mon.b (mon.1) 27 : audit [INF] from='client.? 192.168.123.111:0/2905990733' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch 2026-03-08T23:03:07.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:07 vm11 bash[23232]: audit 2026-03-08T23:03:06.753744+0000 mon.b (mon.1) 27 : audit [INF] from='client.? 192.168.123.111:0/2905990733' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch 2026-03-08T23:03:07.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:07 vm11 bash[23232]: audit 2026-03-08T23:03:06.755156+0000 mon.a (mon.0) 655 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch 2026-03-08T23:03:07.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:07 vm11 bash[23232]: audit 2026-03-08T23:03:06.755156+0000 mon.a (mon.0) 655 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch 2026-03-08T23:03:07.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:07 vm11 bash[23232]: audit 2026-03-08T23:03:07.120983+0000 mon.a (mon.0) 656 : audit [INF] from='client.? 192.168.123.106:0/58648323' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished 2026-03-08T23:03:07.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:07 vm11 bash[23232]: audit 2026-03-08T23:03:07.120983+0000 mon.a (mon.0) 656 : audit [INF] from='client.? 192.168.123.106:0/58648323' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished 2026-03-08T23:03:07.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:07 vm11 bash[23232]: audit 2026-03-08T23:03:07.121054+0000 mon.a (mon.0) 657 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]': finished 2026-03-08T23:03:07.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:07 vm11 bash[23232]: audit 2026-03-08T23:03:07.121054+0000 mon.a (mon.0) 657 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]': finished 2026-03-08T23:03:07.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:07 vm11 bash[23232]: cluster 2026-03-08T23:03:07.132314+0000 mon.a (mon.0) 658 : cluster [DBG] osdmap e57: 8 total, 8 up, 8 in 2026-03-08T23:03:07.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:07 vm11 bash[23232]: cluster 2026-03-08T23:03:07.132314+0000 mon.a (mon.0) 658 : cluster [DBG] osdmap e57: 8 total, 8 up, 8 in 2026-03-08T23:03:07.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:07 vm06 bash[20625]: cluster 2026-03-08T23:03:05.495177+0000 mgr.y (mgr.14150) 245 : cluster [DBG] pgmap v222: 33 pgs: 32 unknown, 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail 2026-03-08T23:03:07.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:07 vm06 bash[20625]: cluster 2026-03-08T23:03:05.495177+0000 mgr.y (mgr.14150) 245 : cluster [DBG] pgmap v222: 33 pgs: 32 unknown, 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail 2026-03-08T23:03:07.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:07 vm06 bash[20625]: audit 2026-03-08T23:03:06.753744+0000 mon.b (mon.1) 27 : audit [INF] from='client.? 192.168.123.111:0/2905990733' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch 2026-03-08T23:03:07.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:07 vm06 bash[20625]: audit 2026-03-08T23:03:06.753744+0000 mon.b (mon.1) 27 : audit [INF] from='client.? 192.168.123.111:0/2905990733' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch 2026-03-08T23:03:07.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:07 vm06 bash[20625]: audit 2026-03-08T23:03:06.755156+0000 mon.a (mon.0) 655 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch 2026-03-08T23:03:07.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:07 vm06 bash[20625]: audit 2026-03-08T23:03:06.755156+0000 mon.a (mon.0) 655 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch 2026-03-08T23:03:07.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:07 vm06 bash[20625]: audit 2026-03-08T23:03:07.120983+0000 mon.a (mon.0) 656 : audit [INF] from='client.? 192.168.123.106:0/58648323' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished 2026-03-08T23:03:07.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:07 vm06 bash[20625]: audit 2026-03-08T23:03:07.120983+0000 mon.a (mon.0) 656 : audit [INF] from='client.? 192.168.123.106:0/58648323' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished 2026-03-08T23:03:07.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:07 vm06 bash[20625]: audit 2026-03-08T23:03:07.121054+0000 mon.a (mon.0) 657 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]': finished 2026-03-08T23:03:07.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:07 vm06 bash[20625]: audit 2026-03-08T23:03:07.121054+0000 mon.a (mon.0) 657 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]': finished 2026-03-08T23:03:07.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:07 vm06 bash[20625]: cluster 2026-03-08T23:03:07.132314+0000 mon.a (mon.0) 658 : cluster [DBG] osdmap e57: 8 total, 8 up, 8 in 2026-03-08T23:03:07.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:07 vm06 bash[20625]: cluster 2026-03-08T23:03:07.132314+0000 mon.a (mon.0) 658 : cluster [DBG] osdmap e57: 8 total, 8 up, 8 in 2026-03-08T23:03:07.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:07 vm06 bash[27746]: cluster 2026-03-08T23:03:05.495177+0000 mgr.y (mgr.14150) 245 : cluster [DBG] pgmap v222: 33 pgs: 32 unknown, 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail 2026-03-08T23:03:07.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:07 vm06 bash[27746]: cluster 2026-03-08T23:03:05.495177+0000 mgr.y (mgr.14150) 245 : cluster [DBG] pgmap v222: 33 pgs: 32 unknown, 1 active+clean; 449 KiB data, 614 MiB used, 159 GiB / 160 GiB avail 2026-03-08T23:03:07.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:07 vm06 bash[27746]: audit 2026-03-08T23:03:06.753744+0000 mon.b (mon.1) 27 : audit [INF] from='client.? 192.168.123.111:0/2905990733' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch 2026-03-08T23:03:07.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:07 vm06 bash[27746]: audit 2026-03-08T23:03:06.753744+0000 mon.b (mon.1) 27 : audit [INF] from='client.? 192.168.123.111:0/2905990733' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch 2026-03-08T23:03:07.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:07 vm06 bash[27746]: audit 2026-03-08T23:03:06.755156+0000 mon.a (mon.0) 655 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch 2026-03-08T23:03:07.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:07 vm06 bash[27746]: audit 2026-03-08T23:03:06.755156+0000 mon.a (mon.0) 655 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]: dispatch 2026-03-08T23:03:07.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:07 vm06 bash[27746]: audit 2026-03-08T23:03:07.120983+0000 mon.a (mon.0) 656 : audit [INF] from='client.? 192.168.123.106:0/58648323' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished 2026-03-08T23:03:07.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:07 vm06 bash[27746]: audit 2026-03-08T23:03:07.120983+0000 mon.a (mon.0) 656 : audit [INF] from='client.? 192.168.123.106:0/58648323' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished 2026-03-08T23:03:07.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:07 vm06 bash[27746]: audit 2026-03-08T23:03:07.121054+0000 mon.a (mon.0) 657 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]': finished 2026-03-08T23:03:07.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:07 vm06 bash[27746]: audit 2026-03-08T23:03:07.121054+0000 mon.a (mon.0) 657 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "datapool", "pg_num": 3, "pgp_num": 3, "pool_type": "replicated"}]': finished 2026-03-08T23:03:07.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:07 vm06 bash[27746]: cluster 2026-03-08T23:03:07.132314+0000 mon.a (mon.0) 658 : cluster [DBG] osdmap e57: 8 total, 8 up, 8 in 2026-03-08T23:03:07.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:07 vm06 bash[27746]: cluster 2026-03-08T23:03:07.132314+0000 mon.a (mon.0) 658 : cluster [DBG] osdmap e57: 8 total, 8 up, 8 in 2026-03-08T23:03:09.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:09 vm06 bash[20625]: cluster 2026-03-08T23:03:07.495488+0000 mgr.y (mgr.14150) 246 : cluster [DBG] pgmap v225: 68 pgs: 14 creating+peering, 32 unknown, 22 active+clean; 450 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 511 B/s rd, 767 B/s wr, 1 op/s 2026-03-08T23:03:09.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:09 vm06 bash[20625]: cluster 2026-03-08T23:03:07.495488+0000 mgr.y (mgr.14150) 246 : cluster [DBG] pgmap v225: 68 pgs: 14 creating+peering, 32 unknown, 22 active+clean; 450 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 511 B/s rd, 767 B/s wr, 1 op/s 2026-03-08T23:03:09.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:09 vm06 bash[20625]: audit 2026-03-08T23:03:08.112988+0000 mon.a (mon.0) 659 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:03:09.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:09 vm06 bash[20625]: audit 2026-03-08T23:03:08.112988+0000 mon.a (mon.0) 659 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:03:09.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:09 vm06 bash[20625]: audit 2026-03-08T23:03:08.118146+0000 mon.a (mon.0) 660 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:03:09.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:09 vm06 bash[20625]: audit 
2026-03-08T23:03:08.118146+0000 mon.a (mon.0) 660 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:03:09.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:09 vm06 bash[20625]: audit 2026-03-08T23:03:08.118919+0000 mon.a (mon.0) 661 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:03:09.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:09 vm06 bash[20625]: audit 2026-03-08T23:03:08.118919+0000 mon.a (mon.0) 661 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:03:09.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:09 vm06 bash[20625]: audit 2026-03-08T23:03:08.119454+0000 mon.a (mon.0) 662 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:03:09.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:09 vm06 bash[20625]: audit 2026-03-08T23:03:08.119454+0000 mon.a (mon.0) 662 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:03:09.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:09 vm06 bash[20625]: cephadm 2026-03-08T23:03:08.122188+0000 mgr.y (mgr.14150) 247 : cephadm [INF] Checking dashboard <-> RGW credentials 2026-03-08T23:03:09.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:09 vm06 bash[20625]: cephadm 2026-03-08T23:03:08.122188+0000 mgr.y (mgr.14150) 247 : cephadm [INF] Checking dashboard <-> RGW credentials 2026-03-08T23:03:09.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:09 vm06 bash[20625]: cluster 2026-03-08T23:03:08.138254+0000 mon.a (mon.0) 663 : cluster [DBG] osdmap e58: 8 total, 8 up, 8 in 2026-03-08T23:03:09.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:09 vm06 
bash[20625]: cluster 2026-03-08T23:03:08.138254+0000 mon.a (mon.0) 663 : cluster [DBG] osdmap e58: 8 total, 8 up, 8 in 2026-03-08T23:03:09.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:09 vm06 bash[20625]: audit 2026-03-08T23:03:08.143499+0000 mon.a (mon.0) 664 : audit [INF] from='client.? 192.168.123.106:0/58648323' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch 2026-03-08T23:03:09.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:09 vm06 bash[20625]: audit 2026-03-08T23:03:08.143499+0000 mon.a (mon.0) 664 : audit [INF] from='client.? 192.168.123.106:0/58648323' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch 2026-03-08T23:03:09.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:09 vm06 bash[20625]: cluster 2026-03-08T23:03:08.342202+0000 mon.a (mon.0) 665 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-08T23:03:09.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:09 vm06 bash[20625]: cluster 2026-03-08T23:03:08.342202+0000 mon.a (mon.0) 665 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-08T23:03:09.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:09 vm06 bash[27746]: cluster 2026-03-08T23:03:07.495488+0000 mgr.y (mgr.14150) 246 : cluster [DBG] pgmap v225: 68 pgs: 14 creating+peering, 32 unknown, 22 active+clean; 450 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 511 B/s rd, 767 B/s wr, 1 op/s 2026-03-08T23:03:09.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:09 vm06 bash[27746]: cluster 2026-03-08T23:03:07.495488+0000 mgr.y (mgr.14150) 246 : cluster [DBG] pgmap v225: 68 pgs: 14 creating+peering, 32 unknown, 22 active+clean; 450 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 511 B/s rd, 767 B/s wr, 1 op/s 2026-03-08T23:03:09.530 
INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:09 vm06 bash[27746]: audit 2026-03-08T23:03:08.112988+0000 mon.a (mon.0) 659 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:03:09.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:09 vm06 bash[27746]: audit 2026-03-08T23:03:08.112988+0000 mon.a (mon.0) 659 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:03:09.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:09 vm06 bash[27746]: audit 2026-03-08T23:03:08.118146+0000 mon.a (mon.0) 660 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:03:09.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:09 vm06 bash[27746]: audit 2026-03-08T23:03:08.118146+0000 mon.a (mon.0) 660 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:03:09.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:09 vm06 bash[27746]: audit 2026-03-08T23:03:08.118919+0000 mon.a (mon.0) 661 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:03:09.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:09 vm06 bash[27746]: audit 2026-03-08T23:03:08.118919+0000 mon.a (mon.0) 661 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:03:09.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:09 vm06 bash[27746]: audit 2026-03-08T23:03:08.119454+0000 mon.a (mon.0) 662 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:03:09.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:09 vm06 bash[27746]: audit 2026-03-08T23:03:08.119454+0000 mon.a (mon.0) 662 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": 
"auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:03:09.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:09 vm06 bash[27746]: cephadm 2026-03-08T23:03:08.122188+0000 mgr.y (mgr.14150) 247 : cephadm [INF] Checking dashboard <-> RGW credentials 2026-03-08T23:03:09.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:09 vm06 bash[27746]: cephadm 2026-03-08T23:03:08.122188+0000 mgr.y (mgr.14150) 247 : cephadm [INF] Checking dashboard <-> RGW credentials 2026-03-08T23:03:09.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:09 vm06 bash[27746]: cluster 2026-03-08T23:03:08.138254+0000 mon.a (mon.0) 663 : cluster [DBG] osdmap e58: 8 total, 8 up, 8 in 2026-03-08T23:03:09.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:09 vm06 bash[27746]: cluster 2026-03-08T23:03:08.138254+0000 mon.a (mon.0) 663 : cluster [DBG] osdmap e58: 8 total, 8 up, 8 in 2026-03-08T23:03:09.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:09 vm06 bash[27746]: audit 2026-03-08T23:03:08.143499+0000 mon.a (mon.0) 664 : audit [INF] from='client.? 192.168.123.106:0/58648323' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch 2026-03-08T23:03:09.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:09 vm06 bash[27746]: audit 2026-03-08T23:03:08.143499+0000 mon.a (mon.0) 664 : audit [INF] from='client.? 
192.168.123.106:0/58648323' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch 2026-03-08T23:03:09.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:09 vm06 bash[27746]: cluster 2026-03-08T23:03:08.342202+0000 mon.a (mon.0) 665 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-08T23:03:09.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:09 vm06 bash[27746]: cluster 2026-03-08T23:03:08.342202+0000 mon.a (mon.0) 665 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-08T23:03:09.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:09 vm11 bash[23232]: cluster 2026-03-08T23:03:07.495488+0000 mgr.y (mgr.14150) 246 : cluster [DBG] pgmap v225: 68 pgs: 14 creating+peering, 32 unknown, 22 active+clean; 450 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 511 B/s rd, 767 B/s wr, 1 op/s 2026-03-08T23:03:09.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:09 vm11 bash[23232]: cluster 2026-03-08T23:03:07.495488+0000 mgr.y (mgr.14150) 246 : cluster [DBG] pgmap v225: 68 pgs: 14 creating+peering, 32 unknown, 22 active+clean; 450 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 511 B/s rd, 767 B/s wr, 1 op/s 2026-03-08T23:03:09.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:09 vm11 bash[23232]: audit 2026-03-08T23:03:08.112988+0000 mon.a (mon.0) 659 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:03:09.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:09 vm11 bash[23232]: audit 2026-03-08T23:03:08.112988+0000 mon.a (mon.0) 659 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:03:09.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:09 vm11 bash[23232]: audit 2026-03-08T23:03:08.118146+0000 mon.a (mon.0) 660 : audit [INF] from='mgr.14150 
192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:03:09.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:09 vm11 bash[23232]: audit 2026-03-08T23:03:08.118146+0000 mon.a (mon.0) 660 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:03:09.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:09 vm11 bash[23232]: audit 2026-03-08T23:03:08.118919+0000 mon.a (mon.0) 661 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:03:09.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:09 vm11 bash[23232]: audit 2026-03-08T23:03:08.118919+0000 mon.a (mon.0) 661 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:03:09.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:09 vm11 bash[23232]: audit 2026-03-08T23:03:08.119454+0000 mon.a (mon.0) 662 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:03:09.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:09 vm11 bash[23232]: audit 2026-03-08T23:03:08.119454+0000 mon.a (mon.0) 662 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:03:09.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:09 vm11 bash[23232]: cephadm 2026-03-08T23:03:08.122188+0000 mgr.y (mgr.14150) 247 : cephadm [INF] Checking dashboard <-> RGW credentials 2026-03-08T23:03:09.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:09 vm11 bash[23232]: cephadm 2026-03-08T23:03:08.122188+0000 mgr.y (mgr.14150) 247 : cephadm [INF] Checking dashboard <-> RGW credentials 2026-03-08T23:03:09.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:09 vm11 bash[23232]: cluster 2026-03-08T23:03:08.138254+0000 
mon.a (mon.0) 663 : cluster [DBG] osdmap e58: 8 total, 8 up, 8 in 2026-03-08T23:03:09.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:09 vm11 bash[23232]: cluster 2026-03-08T23:03:08.138254+0000 mon.a (mon.0) 663 : cluster [DBG] osdmap e58: 8 total, 8 up, 8 in 2026-03-08T23:03:09.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:09 vm11 bash[23232]: audit 2026-03-08T23:03:08.143499+0000 mon.a (mon.0) 664 : audit [INF] from='client.? 192.168.123.106:0/58648323' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch 2026-03-08T23:03:09.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:09 vm11 bash[23232]: audit 2026-03-08T23:03:08.143499+0000 mon.a (mon.0) 664 : audit [INF] from='client.? 192.168.123.106:0/58648323' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch 2026-03-08T23:03:09.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:09 vm11 bash[23232]: cluster 2026-03-08T23:03:08.342202+0000 mon.a (mon.0) 665 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-08T23:03:09.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:09 vm11 bash[23232]: cluster 2026-03-08T23:03:08.342202+0000 mon.a (mon.0) 665 : cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED) 2026-03-08T23:03:10.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:10 vm11 bash[23232]: audit 2026-03-08T23:03:09.242529+0000 mon.a (mon.0) 666 : audit [INF] from='client.? 192.168.123.106:0/58648323' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished 2026-03-08T23:03:10.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:10 vm11 bash[23232]: audit 2026-03-08T23:03:09.242529+0000 mon.a (mon.0) 666 : audit [INF] from='client.? 
192.168.123.106:0/58648323' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished 2026-03-08T23:03:10.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:10 vm11 bash[23232]: cluster 2026-03-08T23:03:09.252071+0000 mon.a (mon.0) 667 : cluster [DBG] osdmap e59: 8 total, 8 up, 8 in 2026-03-08T23:03:10.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:10 vm11 bash[23232]: cluster 2026-03-08T23:03:09.252071+0000 mon.a (mon.0) 667 : cluster [DBG] osdmap e59: 8 total, 8 up, 8 in 2026-03-08T23:03:10.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:10 vm06 bash[20625]: audit 2026-03-08T23:03:09.242529+0000 mon.a (mon.0) 666 : audit [INF] from='client.? 192.168.123.106:0/58648323' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished 2026-03-08T23:03:10.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:10 vm06 bash[20625]: audit 2026-03-08T23:03:09.242529+0000 mon.a (mon.0) 666 : audit [INF] from='client.? 192.168.123.106:0/58648323' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished 2026-03-08T23:03:10.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:10 vm06 bash[20625]: cluster 2026-03-08T23:03:09.252071+0000 mon.a (mon.0) 667 : cluster [DBG] osdmap e59: 8 total, 8 up, 8 in 2026-03-08T23:03:10.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:10 vm06 bash[20625]: cluster 2026-03-08T23:03:09.252071+0000 mon.a (mon.0) 667 : cluster [DBG] osdmap e59: 8 total, 8 up, 8 in 2026-03-08T23:03:10.779 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:10 vm06 bash[27746]: audit 2026-03-08T23:03:09.242529+0000 mon.a (mon.0) 666 : audit [INF] from='client.? 
192.168.123.106:0/58648323' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished 2026-03-08T23:03:10.779 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:10 vm06 bash[27746]: audit 2026-03-08T23:03:09.242529+0000 mon.a (mon.0) 666 : audit [INF] from='client.? 192.168.123.106:0/58648323' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished 2026-03-08T23:03:10.779 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:10 vm06 bash[27746]: cluster 2026-03-08T23:03:09.252071+0000 mon.a (mon.0) 667 : cluster [DBG] osdmap e59: 8 total, 8 up, 8 in 2026-03-08T23:03:10.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:10 vm06 bash[27746]: cluster 2026-03-08T23:03:09.252071+0000 mon.a (mon.0) 667 : cluster [DBG] osdmap e59: 8 total, 8 up, 8 in 2026-03-08T23:03:11.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:11 vm06 bash[20625]: cluster 2026-03-08T23:03:09.495965+0000 mgr.y (mgr.14150) 248 : cluster [DBG] pgmap v228: 100 pgs: 21 creating+peering, 39 unknown, 40 active+clean; 450 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 1.5 KiB/s wr, 3 op/s 2026-03-08T23:03:11.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:11 vm06 bash[20625]: cluster 2026-03-08T23:03:09.495965+0000 mgr.y (mgr.14150) 248 : cluster [DBG] pgmap v228: 100 pgs: 21 creating+peering, 39 unknown, 40 active+clean; 450 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 1.5 KiB/s wr, 3 op/s 2026-03-08T23:03:11.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:11 vm06 bash[20625]: cluster 2026-03-08T23:03:10.279020+0000 mon.a (mon.0) 668 : cluster [DBG] osdmap e60: 8 total, 8 up, 8 in 2026-03-08T23:03:11.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:11 vm06 bash[20625]: cluster 2026-03-08T23:03:10.279020+0000 mon.a (mon.0) 668 : cluster [DBG] osdmap e60: 8 total, 8 up, 8 in 2026-03-08T23:03:11.779 
INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:11 vm06 bash[20625]: audit 2026-03-08T23:03:10.288391+0000 mon.a (mon.0) 669 : audit [INF] from='client.? 192.168.123.106:0/58648323' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-08T23:03:11.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:11 vm06 bash[20625]: audit 2026-03-08T23:03:10.288391+0000 mon.a (mon.0) 669 : audit [INF] from='client.? 192.168.123.106:0/58648323' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-08T23:03:11.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:11 vm06 bash[20625]: audit 2026-03-08T23:03:10.290346+0000 mon.c (mon.2) 14 : audit [INF] from='client.? 192.168.123.106:0/3364098398' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-08T23:03:11.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:11 vm06 bash[20625]: audit 2026-03-08T23:03:10.290346+0000 mon.c (mon.2) 14 : audit [INF] from='client.? 192.168.123.106:0/3364098398' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-08T23:03:11.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:11 vm06 bash[20625]: audit 2026-03-08T23:03:10.308064+0000 mon.a (mon.0) 670 : audit [INF] from='client.? ' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-08T23:03:11.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:11 vm06 bash[20625]: audit 2026-03-08T23:03:10.308064+0000 mon.a (mon.0) 670 : audit [INF] from='client.? 
' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-08T23:03:11.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:11 vm06 bash[20625]: audit 2026-03-08T23:03:11.280084+0000 mon.a (mon.0) 671 : audit [INF] from='client.? 192.168.123.106:0/58648323' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished 2026-03-08T23:03:11.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:11 vm06 bash[20625]: audit 2026-03-08T23:03:11.280084+0000 mon.a (mon.0) 671 : audit [INF] from='client.? 192.168.123.106:0/58648323' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished 2026-03-08T23:03:11.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:11 vm06 bash[20625]: audit 2026-03-08T23:03:11.280248+0000 mon.a (mon.0) 672 : audit [INF] from='client.? ' entity='mgr.y' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished 2026-03-08T23:03:11.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:11 vm06 bash[20625]: audit 2026-03-08T23:03:11.280248+0000 mon.a (mon.0) 672 : audit [INF] from='client.? ' entity='mgr.y' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished 2026-03-08T23:03:11.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:11 vm06 bash[20625]: audit 2026-03-08T23:03:11.295540+0000 mon.c (mon.2) 15 : audit [INF] from='client.? 192.168.123.106:0/3364098398' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-08T23:03:11.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:11 vm06 bash[20625]: audit 2026-03-08T23:03:11.295540+0000 mon.c (mon.2) 15 : audit [INF] from='client.? 
192.168.123.106:0/3364098398' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-08T23:03:11.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:11 vm06 bash[20625]: cluster 2026-03-08T23:03:11.296772+0000 mon.a (mon.0) 673 : cluster [DBG] osdmap e61: 8 total, 8 up, 8 in 2026-03-08T23:03:11.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:11 vm06 bash[20625]: cluster 2026-03-08T23:03:11.296772+0000 mon.a (mon.0) 673 : cluster [DBG] osdmap e61: 8 total, 8 up, 8 in 2026-03-08T23:03:11.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:11 vm06 bash[27746]: cluster 2026-03-08T23:03:09.495965+0000 mgr.y (mgr.14150) 248 : cluster [DBG] pgmap v228: 100 pgs: 21 creating+peering, 39 unknown, 40 active+clean; 450 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 1.5 KiB/s wr, 3 op/s 2026-03-08T23:03:11.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:11 vm06 bash[27746]: cluster 2026-03-08T23:03:09.495965+0000 mgr.y (mgr.14150) 248 : cluster [DBG] pgmap v228: 100 pgs: 21 creating+peering, 39 unknown, 40 active+clean; 450 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 1.5 KiB/s wr, 3 op/s 2026-03-08T23:03:11.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:11 vm06 bash[27746]: cluster 2026-03-08T23:03:10.279020+0000 mon.a (mon.0) 668 : cluster [DBG] osdmap e60: 8 total, 8 up, 8 in 2026-03-08T23:03:11.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:11 vm06 bash[27746]: cluster 2026-03-08T23:03:10.279020+0000 mon.a (mon.0) 668 : cluster [DBG] osdmap e60: 8 total, 8 up, 8 in 2026-03-08T23:03:11.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:11 vm06 bash[27746]: audit 2026-03-08T23:03:10.288391+0000 mon.a (mon.0) 669 : audit [INF] from='client.? 
192.168.123.106:0/58648323' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-08T23:03:11.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:11 vm06 bash[27746]: audit 2026-03-08T23:03:10.288391+0000 mon.a (mon.0) 669 : audit [INF] from='client.? 192.168.123.106:0/58648323' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-08T23:03:11.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:11 vm06 bash[27746]: audit 2026-03-08T23:03:10.290346+0000 mon.c (mon.2) 14 : audit [INF] from='client.? 192.168.123.106:0/3364098398' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-08T23:03:11.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:11 vm06 bash[27746]: audit 2026-03-08T23:03:10.290346+0000 mon.c (mon.2) 14 : audit [INF] from='client.? 192.168.123.106:0/3364098398' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-08T23:03:11.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:11 vm06 bash[27746]: audit 2026-03-08T23:03:10.308064+0000 mon.a (mon.0) 670 : audit [INF] from='client.? ' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-08T23:03:11.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:11 vm06 bash[27746]: audit 2026-03-08T23:03:10.308064+0000 mon.a (mon.0) 670 : audit [INF] from='client.? ' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-08T23:03:11.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:11 vm06 bash[27746]: audit 2026-03-08T23:03:11.280084+0000 mon.a (mon.0) 671 : audit [INF] from='client.? 
192.168.123.106:0/58648323' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished 2026-03-08T23:03:11.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:11 vm06 bash[27746]: audit 2026-03-08T23:03:11.280084+0000 mon.a (mon.0) 671 : audit [INF] from='client.? 192.168.123.106:0/58648323' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished 2026-03-08T23:03:11.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:11 vm06 bash[27746]: audit 2026-03-08T23:03:11.280248+0000 mon.a (mon.0) 672 : audit [INF] from='client.? ' entity='mgr.y' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished 2026-03-08T23:03:11.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:11 vm06 bash[27746]: audit 2026-03-08T23:03:11.280248+0000 mon.a (mon.0) 672 : audit [INF] from='client.? ' entity='mgr.y' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished 2026-03-08T23:03:11.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:11 vm06 bash[27746]: audit 2026-03-08T23:03:11.295540+0000 mon.c (mon.2) 15 : audit [INF] from='client.? 192.168.123.106:0/3364098398' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-08T23:03:11.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:11 vm06 bash[27746]: audit 2026-03-08T23:03:11.295540+0000 mon.c (mon.2) 15 : audit [INF] from='client.? 
192.168.123.106:0/3364098398' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-08T23:03:11.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:11 vm06 bash[27746]: cluster 2026-03-08T23:03:11.296772+0000 mon.a (mon.0) 673 : cluster [DBG] osdmap e61: 8 total, 8 up, 8 in 2026-03-08T23:03:11.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:11 vm06 bash[27746]: cluster 2026-03-08T23:03:11.296772+0000 mon.a (mon.0) 673 : cluster [DBG] osdmap e61: 8 total, 8 up, 8 in 2026-03-08T23:03:11.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:11 vm11 bash[23232]: cluster 2026-03-08T23:03:09.495965+0000 mgr.y (mgr.14150) 248 : cluster [DBG] pgmap v228: 100 pgs: 21 creating+peering, 39 unknown, 40 active+clean; 450 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 1.5 KiB/s wr, 3 op/s 2026-03-08T23:03:11.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:11 vm11 bash[23232]: cluster 2026-03-08T23:03:09.495965+0000 mgr.y (mgr.14150) 248 : cluster [DBG] pgmap v228: 100 pgs: 21 creating+peering, 39 unknown, 40 active+clean; 450 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 1.5 KiB/s wr, 3 op/s 2026-03-08T23:03:11.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:11 vm11 bash[23232]: cluster 2026-03-08T23:03:10.279020+0000 mon.a (mon.0) 668 : cluster [DBG] osdmap e60: 8 total, 8 up, 8 in 2026-03-08T23:03:11.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:11 vm11 bash[23232]: cluster 2026-03-08T23:03:10.279020+0000 mon.a (mon.0) 668 : cluster [DBG] osdmap e60: 8 total, 8 up, 8 in 2026-03-08T23:03:11.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:11 vm11 bash[23232]: audit 2026-03-08T23:03:10.288391+0000 mon.a (mon.0) 669 : audit [INF] from='client.? 
192.168.123.106:0/58648323' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-08T23:03:11.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:11 vm11 bash[23232]: audit 2026-03-08T23:03:10.288391+0000 mon.a (mon.0) 669 : audit [INF] from='client.? 192.168.123.106:0/58648323' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-08T23:03:11.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:11 vm11 bash[23232]: audit 2026-03-08T23:03:10.290346+0000 mon.c (mon.2) 14 : audit [INF] from='client.? 192.168.123.106:0/3364098398' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-08T23:03:11.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:11 vm11 bash[23232]: audit 2026-03-08T23:03:10.290346+0000 mon.c (mon.2) 14 : audit [INF] from='client.? 192.168.123.106:0/3364098398' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-08T23:03:11.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:11 vm11 bash[23232]: audit 2026-03-08T23:03:10.308064+0000 mon.a (mon.0) 670 : audit [INF] from='client.? ' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-08T23:03:11.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:11 vm11 bash[23232]: audit 2026-03-08T23:03:10.308064+0000 mon.a (mon.0) 670 : audit [INF] from='client.? ' entity='mgr.y' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch 2026-03-08T23:03:11.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:11 vm11 bash[23232]: audit 2026-03-08T23:03:11.280084+0000 mon.a (mon.0) 671 : audit [INF] from='client.? 
192.168.123.106:0/58648323' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished 2026-03-08T23:03:11.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:11 vm11 bash[23232]: audit 2026-03-08T23:03:11.280084+0000 mon.a (mon.0) 671 : audit [INF] from='client.? 192.168.123.106:0/58648323' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished 2026-03-08T23:03:11.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:11 vm11 bash[23232]: audit 2026-03-08T23:03:11.280248+0000 mon.a (mon.0) 672 : audit [INF] from='client.? ' entity='mgr.y' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished 2026-03-08T23:03:11.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:11 vm11 bash[23232]: audit 2026-03-08T23:03:11.280248+0000 mon.a (mon.0) 672 : audit [INF] from='client.? ' entity='mgr.y' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished 2026-03-08T23:03:11.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:11 vm11 bash[23232]: audit 2026-03-08T23:03:11.295540+0000 mon.c (mon.2) 15 : audit [INF] from='client.? 192.168.123.106:0/3364098398' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-08T23:03:11.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:11 vm11 bash[23232]: audit 2026-03-08T23:03:11.295540+0000 mon.c (mon.2) 15 : audit [INF] from='client.? 
192.168.123.106:0/3364098398' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-08T23:03:11.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:11 vm11 bash[23232]: cluster 2026-03-08T23:03:11.296772+0000 mon.a (mon.0) 673 : cluster [DBG] osdmap e61: 8 total, 8 up, 8 in 2026-03-08T23:03:11.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:11 vm11 bash[23232]: cluster 2026-03-08T23:03:11.296772+0000 mon.a (mon.0) 673 : cluster [DBG] osdmap e61: 8 total, 8 up, 8 in 2026-03-08T23:03:11.972 INFO:teuthology.orchestra.run.vm11.stderr:Inferring config /var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/mon.b/config 2026-03-08T23:03:12.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:12 vm11 bash[23232]: audit 2026-03-08T23:03:11.305373+0000 mon.a (mon.0) 674 : audit [INF] from='client.? ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-08T23:03:12.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:12 vm11 bash[23232]: audit 2026-03-08T23:03:11.305373+0000 mon.a (mon.0) 674 : audit [INF] from='client.? ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-08T23:03:12.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:12 vm11 bash[23232]: audit 2026-03-08T23:03:11.305476+0000 mon.a (mon.0) 675 : audit [INF] from='client.? 192.168.123.106:0/58648323' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-08T23:03:12.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:12 vm11 bash[23232]: audit 2026-03-08T23:03:11.305476+0000 mon.a (mon.0) 675 : audit [INF] from='client.? 
192.168.123.106:0/58648323' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-08T23:03:12.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:12 vm11 bash[23232]: audit 2026-03-08T23:03:12.111710+0000 mon.b (mon.1) 28 : audit [INF] from='client.? 192.168.123.111:0/3940780114' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]: dispatch 2026-03-08T23:03:12.808 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:12 vm11 bash[23232]: audit 2026-03-08T23:03:12.111710+0000 mon.b (mon.1) 28 : audit [INF] from='client.? 192.168.123.111:0/3940780114' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]: dispatch 2026-03-08T23:03:12.808 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:12 vm11 bash[23232]: audit 2026-03-08T23:03:12.113160+0000 mon.a (mon.0) 676 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]: dispatch 2026-03-08T23:03:12.808 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:12 vm11 bash[23232]: audit 2026-03-08T23:03:12.113160+0000 mon.a (mon.0) 676 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]: dispatch 2026-03-08T23:03:13.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:12 vm06 bash[20625]: audit 2026-03-08T23:03:11.305373+0000 mon.a (mon.0) 674 : audit [INF] from='client.? ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-08T23:03:13.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:12 vm06 bash[20625]: audit 2026-03-08T23:03:11.305373+0000 mon.a (mon.0) 674 : audit [INF] from='client.? 
' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-08T23:03:13.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:12 vm06 bash[20625]: audit 2026-03-08T23:03:11.305476+0000 mon.a (mon.0) 675 : audit [INF] from='client.? 192.168.123.106:0/58648323' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-08T23:03:13.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:12 vm06 bash[20625]: audit 2026-03-08T23:03:11.305476+0000 mon.a (mon.0) 675 : audit [INF] from='client.? 192.168.123.106:0/58648323' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-08T23:03:13.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:12 vm06 bash[20625]: audit 2026-03-08T23:03:12.111710+0000 mon.b (mon.1) 28 : audit [INF] from='client.? 192.168.123.111:0/3940780114' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]: dispatch 2026-03-08T23:03:13.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:12 vm06 bash[20625]: audit 2026-03-08T23:03:12.111710+0000 mon.b (mon.1) 28 : audit [INF] from='client.? 192.168.123.111:0/3940780114' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]: dispatch 2026-03-08T23:03:13.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:12 vm06 bash[20625]: audit 2026-03-08T23:03:12.113160+0000 mon.a (mon.0) 676 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]: dispatch 2026-03-08T23:03:13.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:12 vm06 bash[20625]: audit 2026-03-08T23:03:12.113160+0000 mon.a (mon.0) 676 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]: dispatch 2026-03-08T23:03:13.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:12 vm06 bash[27746]: audit 2026-03-08T23:03:11.305373+0000 mon.a (mon.0) 674 : audit [INF] from='client.? ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-08T23:03:13.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:12 vm06 bash[27746]: audit 2026-03-08T23:03:11.305373+0000 mon.a (mon.0) 674 : audit [INF] from='client.? ' entity='mgr.y' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-08T23:03:13.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:12 vm06 bash[27746]: audit 2026-03-08T23:03:11.305476+0000 mon.a (mon.0) 675 : audit [INF] from='client.? 192.168.123.106:0/58648323' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-08T23:03:13.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:12 vm06 bash[27746]: audit 2026-03-08T23:03:11.305476+0000 mon.a (mon.0) 675 : audit [INF] from='client.? 192.168.123.106:0/58648323' entity='client.rgw.foo.a' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch 2026-03-08T23:03:13.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:12 vm06 bash[27746]: audit 2026-03-08T23:03:12.111710+0000 mon.b (mon.1) 28 : audit [INF] from='client.? 192.168.123.111:0/3940780114' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]: dispatch 2026-03-08T23:03:13.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:12 vm06 bash[27746]: audit 2026-03-08T23:03:12.111710+0000 mon.b (mon.1) 28 : audit [INF] from='client.? 
192.168.123.111:0/3940780114' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]: dispatch 2026-03-08T23:03:13.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:12 vm06 bash[27746]: audit 2026-03-08T23:03:12.113160+0000 mon.a (mon.0) 676 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]: dispatch 2026-03-08T23:03:13.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:12 vm06 bash[27746]: audit 2026-03-08T23:03:12.113160+0000 mon.a (mon.0) 676 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]: dispatch 2026-03-08T23:03:13.030 INFO:journalctl@ceph.rgw.foo.a.vm06.stdout:Mar 08 23:03:12 vm06 bash[53236]: debug 2026-03-08T23:03:12.626+0000 7fb8bd4eb980 -1 LDAP not started since no server URIs were provided in the configuration. 2026-03-08T23:03:14.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:13 vm06 bash[20625]: cluster 2026-03-08T23:03:11.496317+0000 mgr.y (mgr.14150) 249 : cluster [DBG] pgmap v231: 132 pgs: 28 creating+peering, 40 unknown, 64 active+clean; 450 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 2.7 KiB/s rd, 1023 B/s wr, 5 op/s 2026-03-08T23:03:14.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:13 vm06 bash[20625]: cluster 2026-03-08T23:03:11.496317+0000 mgr.y (mgr.14150) 249 : cluster [DBG] pgmap v231: 132 pgs: 28 creating+peering, 40 unknown, 64 active+clean; 450 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 2.7 KiB/s rd, 1023 B/s wr, 5 op/s 2026-03-08T23:03:14.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:13 vm06 bash[20625]: audit 2026-03-08T23:03:12.517142+0000 mon.a (mon.0) 677 : audit [INF] from='client.? 
' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished 2026-03-08T23:03:14.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:13 vm06 bash[20625]: audit 2026-03-08T23:03:12.517142+0000 mon.a (mon.0) 677 : audit [INF] from='client.? ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished 2026-03-08T23:03:14.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:13 vm06 bash[20625]: audit 2026-03-08T23:03:12.517423+0000 mon.a (mon.0) 678 : audit [INF] from='client.? 192.168.123.106:0/58648323' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished 2026-03-08T23:03:14.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:13 vm06 bash[20625]: audit 2026-03-08T23:03:12.517423+0000 mon.a (mon.0) 678 : audit [INF] from='client.? 192.168.123.106:0/58648323' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished 2026-03-08T23:03:14.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:13 vm06 bash[20625]: audit 2026-03-08T23:03:12.517566+0000 mon.a (mon.0) 679 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]': finished 2026-03-08T23:03:14.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:13 vm06 bash[20625]: audit 2026-03-08T23:03:12.517566+0000 mon.a (mon.0) 679 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]': finished 2026-03-08T23:03:14.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:13 vm06 bash[20625]: cluster 2026-03-08T23:03:12.534342+0000 mon.a (mon.0) 680 : cluster [DBG] osdmap e62: 8 total, 8 up, 8 in 2026-03-08T23:03:14.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:13 vm06 bash[20625]: cluster 2026-03-08T23:03:12.534342+0000 mon.a (mon.0) 680 : cluster [DBG] osdmap e62: 8 total, 8 up, 8 in 2026-03-08T23:03:14.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:13 vm06 bash[20625]: audit 2026-03-08T23:03:12.892261+0000 mon.a (mon.0) 681 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:03:14.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:13 vm06 bash[20625]: audit 2026-03-08T23:03:12.892261+0000 mon.a (mon.0) 681 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:03:14.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:13 vm06 bash[20625]: audit 2026-03-08T23:03:12.911551+0000 mon.a (mon.0) 682 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:03:14.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:13 vm06 bash[20625]: audit 2026-03-08T23:03:12.911551+0000 mon.a (mon.0) 682 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:03:14.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:13 vm06 bash[20625]: audit 2026-03-08T23:03:12.937219+0000 mon.a (mon.0) 683 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:03:14.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:13 vm06 bash[20625]: audit 2026-03-08T23:03:12.937219+0000 mon.a (mon.0) 683 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:03:14.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:13 vm06 bash[20625]: audit 
2026-03-08T23:03:12.960450+0000 mon.a (mon.0) 684 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:03:14.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:13 vm06 bash[20625]: audit 2026-03-08T23:03:12.960450+0000 mon.a (mon.0) 684 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:03:14.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:13 vm06 bash[20625]: audit 2026-03-08T23:03:13.348503+0000 mon.a (mon.0) 685 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:03:14.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:13 vm06 bash[20625]: audit 2026-03-08T23:03:13.348503+0000 mon.a (mon.0) 685 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:03:14.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:13 vm06 bash[20625]: audit 2026-03-08T23:03:13.349111+0000 mon.a (mon.0) 686 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:03:14.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:13 vm06 bash[20625]: audit 2026-03-08T23:03:13.349111+0000 mon.a (mon.0) 686 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:03:14.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:13 vm06 bash[27746]: cluster 2026-03-08T23:03:11.496317+0000 mgr.y (mgr.14150) 249 : cluster [DBG] pgmap v231: 132 pgs: 28 creating+peering, 40 unknown, 64 active+clean; 450 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 2.7 KiB/s rd, 1023 B/s wr, 5 op/s 2026-03-08T23:03:14.030 
INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:13 vm06 bash[27746]: cluster 2026-03-08T23:03:11.496317+0000 mgr.y (mgr.14150) 249 : cluster [DBG] pgmap v231: 132 pgs: 28 creating+peering, 40 unknown, 64 active+clean; 450 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 2.7 KiB/s rd, 1023 B/s wr, 5 op/s
2026-03-08T23:03:14.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:13 vm06 bash[27746]: audit 2026-03-08T23:03:12.517142+0000 mon.a (mon.0) 677 : audit [INF] from='client.? ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
2026-03-08T23:03:14.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:13 vm06 bash[27746]: audit 2026-03-08T23:03:12.517423+0000 mon.a (mon.0) 678 : audit [INF] from='client.? 192.168.123.106:0/58648323' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
2026-03-08T23:03:14.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:13 vm06 bash[27746]: audit 2026-03-08T23:03:12.517566+0000 mon.a (mon.0) 679 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]': finished
2026-03-08T23:03:14.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:13 vm06 bash[27746]: cluster 2026-03-08T23:03:12.534342+0000 mon.a (mon.0) 680 : cluster [DBG] osdmap e62: 8 total, 8 up, 8 in
2026-03-08T23:03:14.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:13 vm06 bash[27746]: audit 2026-03-08T23:03:12.892261+0000 mon.a (mon.0) 681 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:03:14.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:13 vm06 bash[27746]: audit 2026-03-08T23:03:12.911551+0000 mon.a (mon.0) 682 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:03:14.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:13 vm06 bash[27746]: audit 2026-03-08T23:03:12.937219+0000 mon.a (mon.0) 683 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:03:14.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:13 vm06 bash[27746]: audit 2026-03-08T23:03:12.960450+0000 mon.a (mon.0) 684 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T23:03:14.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:13 vm06 bash[27746]: audit 2026-03-08T23:03:13.348503+0000 mon.a (mon.0) 685 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:03:14.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:13 vm06 bash[27746]: audit 2026-03-08T23:03:13.349111+0000 mon.a (mon.0) 686 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T23:03:14.057
INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:13 vm11 bash[23232]: cluster 2026-03-08T23:03:11.496317+0000 mgr.y (mgr.14150) 249 : cluster [DBG] pgmap v231: 132 pgs: 28 creating+peering, 40 unknown, 64 active+clean; 450 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 2.7 KiB/s rd, 1023 B/s wr, 5 op/s
2026-03-08T23:03:14.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:13 vm11 bash[23232]: audit 2026-03-08T23:03:12.517142+0000 mon.a (mon.0) 677 : audit [INF] from='client.? ' entity='mgr.y' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
2026-03-08T23:03:14.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:13 vm11 bash[23232]: audit 2026-03-08T23:03:12.517423+0000 mon.a (mon.0) 678 : audit [INF] from='client.? 192.168.123.106:0/58648323' entity='client.rgw.foo.a' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
2026-03-08T23:03:14.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:13 vm11 bash[23232]: audit 2026-03-08T23:03:12.517566+0000 mon.a (mon.0) 679 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "datapool","app": "rbd"}]': finished
2026-03-08T23:03:14.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:13 vm11 bash[23232]: cluster 2026-03-08T23:03:12.534342+0000 mon.a (mon.0) 680 : cluster [DBG] osdmap e62: 8 total, 8 up, 8 in
2026-03-08T23:03:14.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:13 vm11 bash[23232]: audit 2026-03-08T23:03:12.892261+0000 mon.a (mon.0) 681 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:03:14.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:13 vm11 bash[23232]: audit 2026-03-08T23:03:12.911551+0000 mon.a (mon.0) 682 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:03:14.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:13 vm11 bash[23232]: audit 2026-03-08T23:03:12.937219+0000 mon.a (mon.0) 683 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:03:14.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:13 vm11 bash[23232]: audit 2026-03-08T23:03:12.960450+0000 mon.a (mon.0) 684 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T23:03:14.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:13 vm11 bash[23232]: audit 2026-03-08T23:03:13.348503+0000 mon.a (mon.0) 685 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:03:14.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:13 vm11 bash[23232]: audit 2026-03-08T23:03:13.349111+0000 mon.a (mon.0) 686 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T23:03:15.010 DEBUG:teuthology.orchestra.run.vm11:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e2eb96e6-1b41-11f1-83e5-75f1b5373d30 -- ceph orch apply iscsi datapool admin admin --trusted_ip_list 192.168.123.111 --placement '1;vm11=iscsi.a'
2026-03-08T23:03:15.010 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:14 vm11 bash[23232]: cephadm 2026-03-08T23:03:13.351592+0000 mgr.y (mgr.14150) 250 : cephadm [INF] Checking dashboard <-> RGW credentials
2026-03-08T23:03:15.010 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:14 vm11 bash[23232]: cluster 2026-03-08T23:03:13.497269+0000 mgr.y (mgr.14150) 251 : cluster [DBG] pgmap v233: 132 pgs: 9 creating+peering, 123 active+clean; 452 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 26 KiB/s rd, 3.3 KiB/s wr, 56 op/s
2026-03-08T23:03:15.010 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:14 vm11 bash[23232]: cluster 2026-03-08T23:03:13.630806+0000 mon.a (mon.0) 687 : cluster [DBG] osdmap e63: 8 total, 8 up, 8 in
2026-03-08T23:03:15.010
INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:14 vm11 bash[23232]: cluster 2026-03-08T23:03:13.630806+0000 mon.a (mon.0) 687 : cluster [DBG] osdmap e63: 8 total, 8 up, 8 in
2026-03-08T23:03:15.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:14 vm11 bash[23232]: audit 2026-03-08T23:03:13.746817+0000 mon.a (mon.0) 688 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:03:15.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:14 vm11 bash[23232]: cluster 2026-03-08T23:03:13.932753+0000 mon.a (mon.0) 689 : cluster [INF] Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
2026-03-08T23:03:15.011 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:14 vm11 bash[23232]: cluster 2026-03-08T23:03:13.932781+0000 mon.a (mon.0) 690 : cluster [INF] Cluster is now healthy
2026-03-08T23:03:15.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:14 vm06 bash[20625]: cephadm 2026-03-08T23:03:13.351592+0000 mgr.y (mgr.14150) 250 : cephadm [INF] Checking dashboard <-> RGW credentials
2026-03-08T23:03:15.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:14 vm06 bash[20625]: cluster 2026-03-08T23:03:13.497269+0000 mgr.y (mgr.14150) 251 : cluster [DBG] pgmap v233: 132 pgs: 9 creating+peering, 123 active+clean; 452 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 26 KiB/s rd, 3.3 KiB/s wr, 56 op/s
2026-03-08T23:03:15.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:14 vm06 bash[20625]: cluster 2026-03-08T23:03:13.630806+0000 mon.a (mon.0) 687 : cluster [DBG] osdmap e63: 8 total, 8 up, 8 in
2026-03-08T23:03:15.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:14 vm06 bash[20625]: audit 2026-03-08T23:03:13.746817+0000 mon.a (mon.0) 688 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:03:15.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:14 vm06 bash[20625]: cluster 2026-03-08T23:03:13.932753+0000 mon.a (mon.0) 689 : cluster [INF] Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
2026-03-08T23:03:15.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:14 vm06 bash[20625]: cluster 2026-03-08T23:03:13.932781+0000 mon.a (mon.0) 690 : cluster [INF] Cluster is now healthy
2026-03-08T23:03:15.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:14 vm06 bash[27746]: cephadm 2026-03-08T23:03:13.351592+0000 mgr.y (mgr.14150) 250 : cephadm [INF] Checking dashboard <-> RGW credentials
2026-03-08T23:03:15.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:14 vm06 bash[27746]: cluster 2026-03-08T23:03:13.497269+0000 mgr.y (mgr.14150) 251 : cluster [DBG] pgmap v233: 132 pgs: 9 creating+peering, 123 active+clean; 452 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 26 KiB/s rd, 3.3 KiB/s wr, 56 op/s
2026-03-08T23:03:15.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:14 vm06 bash[27746]: cluster 2026-03-08T23:03:13.630806+0000 mon.a (mon.0) 687 : cluster [DBG] osdmap e63: 8 total, 8 up, 8 in
2026-03-08T23:03:15.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:14 vm06 bash[27746]: audit 2026-03-08T23:03:13.746817+0000 mon.a (mon.0) 688 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:03:15.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:14 vm06 bash[27746]: cluster 2026-03-08T23:03:13.932753+0000 mon.a (mon.0) 689 : cluster [INF] Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
2026-03-08T23:03:15.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:14 vm06 bash[27746]: cluster 2026-03-08T23:03:13.932781+0000 mon.a (mon.0) 690 : cluster [INF] Cluster is now healthy
2026-03-08T23:03:16.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:15 vm06 bash[20625]: cluster 2026-03-08T23:03:14.653998+0000 mon.a (mon.0) 691 : cluster [DBG] osdmap e64: 8 total, 8 up, 8 in
2026-03-08T23:03:16.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:15 vm06 bash[27746]: cluster 2026-03-08T23:03:14.653998+0000 mon.a (mon.0) 691 : cluster [DBG] osdmap e64: 8 total, 8 up, 8 in
2026-03-08T23:03:16.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:15 vm06 bash[27746]:
cluster 2026-03-08T23:03:14.653998+0000 mon.a (mon.0) 691 : cluster [DBG] osdmap e64: 8 total, 8 up, 8 in
2026-03-08T23:03:16.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:15 vm11 bash[23232]: cluster 2026-03-08T23:03:14.653998+0000 mon.a (mon.0) 691 : cluster [DBG] osdmap e64: 8 total, 8 up, 8 in
2026-03-08T23:03:17.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:16 vm06 bash[20625]: cluster 2026-03-08T23:03:15.497601+0000 mgr.y (mgr.14150) 252 : cluster [DBG] pgmap v236: 132 pgs: 9 creating+peering, 123 active+clean; 452 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 24 KiB/s rd, 3.1 KiB/s wr, 53 op/s
2026-03-08T23:03:17.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:16 vm06 bash[27746]: cluster 2026-03-08T23:03:15.497601+0000 mgr.y (mgr.14150) 252 : cluster [DBG] pgmap v236: 132 pgs: 9 creating+peering, 123 active+clean; 452 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 24 KiB/s rd, 3.1 KiB/s wr, 53 op/s
2026-03-08T23:03:17.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:16 vm11 bash[23232]: cluster 2026-03-08T23:03:15.497601+0000 mgr.y (mgr.14150) 252 : cluster [DBG] pgmap v236: 132 pgs: 9 creating+peering, 123 active+clean; 452 KiB data, 215 MiB used, 160 GiB / 160 GiB avail; 24 KiB/s rd, 3.1 KiB/s wr, 53 op/s
2026-03-08T23:03:19.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:18 vm06 bash[20625]: cluster 2026-03-08T23:03:17.498082+0000 mgr.y (mgr.14150) 253 : cluster [DBG] pgmap v237: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 72 KiB/s rd, 5.8 KiB/s wr, 171 op/s
2026-03-08T23:03:19.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:18 vm06 bash[27746]: cluster 2026-03-08T23:03:17.498082+0000 mgr.y (mgr.14150) 253 : cluster [DBG] pgmap v237: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 72 KiB/s rd, 5.8 KiB/s wr, 171 op/s
2026-03-08T23:03:19.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:18 vm11 bash[23232]: cluster 2026-03-08T23:03:17.498082+0000 mgr.y (mgr.14150) 253 : cluster [DBG] pgmap v237: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 72 KiB/s rd, 5.8 KiB/s wr, 171 op/s
2026-03-08T23:03:19.645 INFO:teuthology.orchestra.run.vm11.stderr:Inferring config /var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/mon.b/config
2026-03-08T23:03:20.393 INFO:teuthology.orchestra.run.vm11.stdout:Scheduled iscsi.datapool update...
2026-03-08T23:03:20.543 INFO:tasks.cephadm:Distributing iscsi-gateway.cfg...
2026-03-08T23:03:20.543 DEBUG:teuthology.orchestra.run.vm06:> set -ex
2026-03-08T23:03:20.543 DEBUG:teuthology.orchestra.run.vm06:> sudo dd of=/etc/ceph/iscsi-gateway.cfg
2026-03-08T23:03:20.552 DEBUG:teuthology.orchestra.run.vm11:> set -ex
2026-03-08T23:03:20.552 DEBUG:teuthology.orchestra.run.vm11:> sudo dd of=/etc/ceph/iscsi-gateway.cfg
2026-03-08T23:03:20.562 DEBUG:teuthology.orchestra.run.vm11:iscsi.iscsi.a> sudo journalctl -f -n 0 -u ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@iscsi.iscsi.a.service
2026-03-08T23:03:20.606 INFO:tasks.cephadm:Adding prometheus.a on vm11
2026-03-08T23:03:20.606 DEBUG:teuthology.orchestra.run.vm11:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e2eb96e6-1b41-11f1-83e5-75f1b5373d30 -- ceph orch apply prometheus '1;vm11=a'
2026-03-08T23:03:20.705 INFO:journalctl@ceph.iscsi.iscsi.a.vm11.stdout:Mar 08 23:03:20 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:03:21.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:21 vm11 bash[23232]: cluster 2026-03-08T23:03:19.498553+0000 mgr.y (mgr.14150) 254 : cluster [DBG] pgmap v238: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 69 KiB/s rd, 5.3 KiB/s wr, 163 op/s 2026-03-08T23:03:21.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:21 vm11 bash[23232]: cluster 2026-03-08T23:03:19.498553+0000 mgr.y (mgr.14150) 254 : cluster [DBG] pgmap v238: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 69 KiB/s rd, 5.3 KiB/s wr, 163 op/s 2026-03-08T23:03:21.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:21 vm11 bash[23232]: audit 2026-03-08T23:03:20.391197+0000 mon.a (mon.0) 692 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:03:21.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:21 vm11 bash[23232]: audit 2026-03-08T23:03:20.391197+0000 mon.a (mon.0) 692 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:03:21.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:21 vm11 bash[23232]: audit 2026-03-08T23:03:20.392881+0000 mon.a (mon.0) 693 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:03:21.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:21 vm11 bash[23232]: audit 2026-03-08T23:03:20.392881+0000 mon.a (mon.0) 693 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:03:21.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:21 vm11 bash[23232]: audit 2026-03-08T23:03:20.394328+0000 mon.a (mon.0) 694 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:03:21.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 
23:03:21 vm11 bash[23232]: audit 2026-03-08T23:03:20.394328+0000 mon.a (mon.0) 694 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:03:21.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:21 vm11 bash[23232]: audit 2026-03-08T23:03:20.394723+0000 mon.a (mon.0) 695 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:03:21.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:21 vm11 bash[23232]: audit 2026-03-08T23:03:20.394723+0000 mon.a (mon.0) 695 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:03:21.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:21 vm11 bash[23232]: audit 2026-03-08T23:03:20.400751+0000 mon.a (mon.0) 696 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:03:21.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:21 vm11 bash[23232]: audit 2026-03-08T23:03:20.400751+0000 mon.a (mon.0) 696 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:03:21.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:21 vm11 bash[23232]: audit 2026-03-08T23:03:20.404612+0000 mon.a (mon.0) 697 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 2026-03-08T23:03:21.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:21 vm11 bash[23232]: audit 2026-03-08T23:03:20.404612+0000 mon.a (mon.0) 697 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 
cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch 2026-03-08T23:03:21.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:21 vm11 bash[23232]: audit 2026-03-08T23:03:20.407054+0000 mon.a (mon.0) 698 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]': finished 2026-03-08T23:03:21.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:21 vm11 bash[23232]: audit 2026-03-08T23:03:20.407054+0000 mon.a (mon.0) 698 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]': finished 2026-03-08T23:03:21.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:21 vm11 bash[23232]: audit 2026-03-08T23:03:20.413356+0000 mon.a (mon.0) 699 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:03:21.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:21 vm11 bash[23232]: audit 2026-03-08T23:03:20.413356+0000 mon.a (mon.0) 699 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:03:21.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:21 vm11 bash[23232]: cluster 2026-03-08T23:03:20.977750+0000 
mon.a (mon.0) 700 : cluster [DBG] osdmap e65: 8 total, 8 up, 8 in
2026-03-08T23:03:21.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:21 vm11 bash[23232]: cluster 2026-03-08T23:03:20.977750+0000 mon.a (mon.0) 700 : cluster [DBG] osdmap e65: 8 total, 8 up, 8 in
2026-03-08T23:03:21.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:21 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:03:21.308 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:03:21 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:03:21.308 INFO:journalctl@ceph.osd.4.vm11.stdout:Mar 08 23:03:21 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:03:21.308 INFO:journalctl@ceph.osd.5.vm11.stdout:Mar 08 23:03:21 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:03:21.308 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 08 23:03:21 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:03:21.308 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 08 23:03:21 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:03:21.308 INFO:journalctl@ceph.iscsi.iscsi.a.vm11.stdout:Mar 08 23:03:21 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:03:21.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:21 vm06 bash[20625]: cluster 2026-03-08T23:03:19.498553+0000 mgr.y (mgr.14150) 254 : cluster [DBG] pgmap v238: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 69 KiB/s rd, 5.3 KiB/s wr, 163 op/s
2026-03-08T23:03:21.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:21 vm06 bash[20625]: audit 2026-03-08T23:03:20.391197+0000 mon.a (mon.0) 692 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:03:21.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:21 vm06 bash[20625]: audit 2026-03-08T23:03:20.392881+0000 mon.a (mon.0) 693 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T23:03:21.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:21 vm06 bash[20625]: audit 2026-03-08T23:03:20.394328+0000 mon.a (mon.0) 694 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:03:21.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:21 vm06 bash[20625]: audit 2026-03-08T23:03:20.394723+0000 mon.a (mon.0) 695 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T23:03:21.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:21 vm06 bash[20625]: audit 2026-03-08T23:03:20.400751+0000 mon.a (mon.0) 696 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:03:21.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:21 vm06 bash[20625]: audit 2026-03-08T23:03:20.404612+0000 mon.a (mon.0) 697 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch
2026-03-08T23:03:21.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:21 vm06 bash[20625]: audit 2026-03-08T23:03:20.407054+0000 mon.a (mon.0) 698 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]': finished
2026-03-08T23:03:21.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:21 vm06 bash[20625]: audit 2026-03-08T23:03:20.413356+0000 mon.a (mon.0) 699 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:03:21.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:21 vm06 bash[20625]: cluster 2026-03-08T23:03:20.977750+0000 mon.a (mon.0) 700 : cluster [DBG] osdmap e65: 8 total, 8 up, 8 in
2026-03-08T23:03:21.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:21 vm06 bash[27746]: cluster 2026-03-08T23:03:19.498553+0000 mgr.y (mgr.14150) 254 : cluster [DBG] pgmap v238: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 69 KiB/s rd, 5.3 KiB/s wr, 163 op/s
2026-03-08T23:03:21.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:21 vm06 bash[27746]: audit 2026-03-08T23:03:20.391197+0000 mon.a (mon.0) 692 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:03:21.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:21 vm06 bash[27746]: audit 2026-03-08T23:03:20.392881+0000 mon.a (mon.0) 693 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T23:03:21.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:21 vm06 bash[27746]: audit 2026-03-08T23:03:20.394328+0000 mon.a (mon.0) 694 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:03:21.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:21 vm06 bash[27746]: audit 2026-03-08T23:03:20.394723+0000 mon.a (mon.0) 695 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T23:03:21.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:21 vm06 bash[27746]: audit 2026-03-08T23:03:20.400751+0000 mon.a (mon.0) 696 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:03:21.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:21 vm06 bash[27746]: audit 2026-03-08T23:03:20.404612+0000 mon.a (mon.0) 697 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]: dispatch
2026-03-08T23:03:21.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:21 vm06 bash[27746]: audit 2026-03-08T23:03:20.407054+0000 mon.a (mon.0) 698 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd='[{"prefix": "auth get-or-create", "entity": "client.iscsi.iscsi.a", "caps": ["mon", "profile rbd, allow command \"osd blocklist\", allow command \"config-key get\" with \"key\" prefix \"iscsi/\"", "mgr", "allow command \"service status\"", "osd", "allow rwx"]}]': finished
2026-03-08T23:03:21.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:21 vm06 bash[27746]: audit 2026-03-08T23:03:20.413356+0000 mon.a (mon.0) 699 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:03:21.531 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:21 vm06 bash[27746]: cluster 2026-03-08T23:03:20.977750+0000 mon.a (mon.0) 700 : cluster [DBG] osdmap e65: 8 total, 8 up, 8 in
2026-03-08T23:03:21.700 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:21 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:03:21.700 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:03:21 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:03:21.700 INFO:journalctl@ceph.osd.4.vm11.stdout:Mar 08 23:03:21 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:03:21.700 INFO:journalctl@ceph.osd.5.vm11.stdout:Mar 08 23:03:21 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:03:21.700 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 08 23:03:21 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:03:21.700 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 08 23:03:21 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:03:21.701 INFO:journalctl@ceph.iscsi.iscsi.a.vm11.stdout:Mar 08 23:03:21 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:03:21.701 INFO:journalctl@ceph.iscsi.iscsi.a.vm11.stdout:Mar 08 23:03:21 vm11 systemd[1]: Started Ceph iscsi.iscsi.a for e2eb96e6-1b41-11f1-83e5-75f1b5373d30.
2026-03-08T23:03:22.307 INFO:journalctl@ceph.iscsi.iscsi.a.vm11.stdout:Mar 08 23:03:21 vm11 bash[48986]: debug Started the configuration object watcher
2026-03-08T23:03:22.307 INFO:journalctl@ceph.iscsi.iscsi.a.vm11.stdout:Mar 08 23:03:21 vm11 bash[48986]: debug Checking for config object changes every 1s
2026-03-08T23:03:22.308 INFO:journalctl@ceph.iscsi.iscsi.a.vm11.stdout:Mar 08 23:03:21 vm11 bash[48986]: debug Processing osd blocklist entries for this node
2026-03-08T23:03:22.308 INFO:journalctl@ceph.iscsi.iscsi.a.vm11.stdout:Mar 08 23:03:22 vm11 bash[48986]: debug Reading the configuration object to update local LIO configuration
2026-03-08T23:03:22.308 INFO:journalctl@ceph.iscsi.iscsi.a.vm11.stdout:Mar 08 23:03:22 vm11 bash[48986]: debug Configuration does not have an entry for this host(vm11.local) - nothing to define to LIO
2026-03-08T23:03:22.308 INFO:journalctl@ceph.iscsi.iscsi.a.vm11.stdout:Mar 08 23:03:22 vm11 bash[48986]: * Serving Flask app 'rbd-target-api' (lazy loading)
2026-03-08T23:03:22.308 INFO:journalctl@ceph.iscsi.iscsi.a.vm11.stdout:Mar 08 23:03:22 vm11 bash[48986]: * Environment: production
2026-03-08T23:03:22.308 INFO:journalctl@ceph.iscsi.iscsi.a.vm11.stdout:Mar 08 23:03:22 vm11 bash[48986]: WARNING: This is a development server. Do not use it in a production deployment.
2026-03-08T23:03:22.308 INFO:journalctl@ceph.iscsi.iscsi.a.vm11.stdout:Mar 08 23:03:22 vm11 bash[48986]: Use a production WSGI server instead.
2026-03-08T23:03:22.308 INFO:journalctl@ceph.iscsi.iscsi.a.vm11.stdout:Mar 08 23:03:22 vm11 bash[48986]: * Debug mode: off
2026-03-08T23:03:22.308 INFO:journalctl@ceph.iscsi.iscsi.a.vm11.stdout:Mar 08 23:03:22 vm11 bash[48986]: debug * Running on all addresses.
2026-03-08T23:03:22.308 INFO:journalctl@ceph.iscsi.iscsi.a.vm11.stdout:Mar 08 23:03:22 vm11 bash[48986]: * Running on all addresses.
2026-03-08T23:03:22.308 INFO:journalctl@ceph.iscsi.iscsi.a.vm11.stdout:Mar 08 23:03:22 vm11 bash[48986]: debug * Running on http://[::1]:5000/ (Press CTRL+C to quit)
2026-03-08T23:03:22.308 INFO:journalctl@ceph.iscsi.iscsi.a.vm11.stdout:Mar 08 23:03:22 vm11 bash[48986]: * Running on http://[::1]:5000/ (Press CTRL+C to quit)
2026-03-08T23:03:22.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:22 vm11 bash[23232]: audit 2026-03-08T23:03:20.357324+0000 mgr.y (mgr.14150) 255 : audit [DBG] from='client.24406 -' entity='client.admin' cmd=[{"prefix": "orch apply iscsi", "pool": "datapool", "api_user": "admin", "api_password": "admin", "trusted_ip_list": "192.168.123.111", "placement": "1;vm11=iscsi.a", "target": ["mon-mgr", ""]}]: dispatch
2026-03-08T23:03:22.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:22 vm11 bash[23232]: cephadm 2026-03-08T23:03:20.358613+0000 mgr.y (mgr.14150) 256 : cephadm [INF] Saving service iscsi.datapool spec with placement vm11=iscsi.a;count:1
2026-03-08T23:03:22.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:22 vm11 bash[23232]: cephadm 2026-03-08T23:03:20.413990+0000 mgr.y (mgr.14150) 257 : cephadm [INF] Deploying daemon iscsi.iscsi.a on vm11
2026-03-08T23:03:22.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:22 vm11 bash[23232]: audit 2026-03-08T23:03:21.495386+0000 mon.a (mon.0) 701 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:03:22.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:22 vm11 bash[23232]: audit 2026-03-08T23:03:21.509657+0000 mon.a (mon.0) 702 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:03:22.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:22 vm11 bash[23232]: audit 2026-03-08T23:03:21.523092+0000 mon.a (mon.0) 703 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:03:22.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:22 vm11 bash[23232]: audit 2026-03-08T23:03:21.549637+0000 mon.a (mon.0) 704 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:03:22.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:22 vm11 bash[23232]: audit 2026-03-08T23:03:21.567691+0000 mon.a (mon.0) 705 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T23:03:22.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:22 vm06 bash[20625]: audit 2026-03-08T23:03:20.357324+0000 mgr.y (mgr.14150) 255 : audit [DBG] from='client.24406 -' entity='client.admin' cmd=[{"prefix": "orch apply iscsi", "pool": "datapool", "api_user": "admin", "api_password": "admin", "trusted_ip_list": "192.168.123.111", "placement": "1;vm11=iscsi.a", "target": ["mon-mgr", ""]}]: dispatch
2026-03-08T23:03:22.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:22 vm06 bash[20625]: cephadm 2026-03-08T23:03:20.358613+0000 mgr.y (mgr.14150) 256 : cephadm [INF] Saving service iscsi.datapool spec with placement vm11=iscsi.a;count:1
2026-03-08T23:03:22.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:22 vm06 bash[20625]: cephadm 2026-03-08T23:03:20.413990+0000 mgr.y (mgr.14150) 257 : cephadm [INF] Deploying daemon iscsi.iscsi.a on vm11
2026-03-08T23:03:22.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:22 vm06 bash[20625]: audit 2026-03-08T23:03:21.495386+0000 mon.a (mon.0) 701 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:03:22.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:22 vm06 bash[20625]: audit 2026-03-08T23:03:21.509657+0000 mon.a (mon.0) 702 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:03:22.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:22 vm06 bash[20625]: audit 2026-03-08T23:03:21.523092+0000 mon.a (mon.0) 703 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:03:22.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:22 vm06 bash[20625]: audit 2026-03-08T23:03:21.549637+0000 mon.a (mon.0) 704 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:03:22.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:22 vm06 bash[20625]: audit 2026-03-08T23:03:21.567691+0000 mon.a (mon.0) 705 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T23:03:22.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:22 vm06 bash[27746]: audit 2026-03-08T23:03:20.357324+0000 mgr.y (mgr.14150) 255 : audit [DBG] from='client.24406 -' entity='client.admin' cmd=[{"prefix": "orch apply iscsi", "pool": "datapool", "api_user": "admin", "api_password": "admin", "trusted_ip_list": "192.168.123.111", "placement": "1;vm11=iscsi.a", "target": ["mon-mgr", ""]}]: dispatch
2026-03-08T23:03:22.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:22 vm06 bash[27746]: cephadm 2026-03-08T23:03:20.358613+0000 mgr.y (mgr.14150) 256 : cephadm [INF] Saving service iscsi.datapool spec with placement vm11=iscsi.a;count:1
2026-03-08T23:03:22.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:22 vm06 bash[27746]: cephadm 2026-03-08T23:03:20.413990+0000 mgr.y (mgr.14150) 257 : cephadm [INF] Deploying daemon iscsi.iscsi.a on vm11
2026-03-08T23:03:22.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:22 vm06 bash[27746]: audit 2026-03-08T23:03:21.495386+0000 mon.a (mon.0) 701 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:03:22.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:22 vm06 bash[27746]: audit 2026-03-08T23:03:21.509657+0000 mon.a (mon.0) 702 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:03:22.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:22 vm06 bash[27746]: audit 2026-03-08T23:03:21.523092+0000 mon.a (mon.0) 703 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:03:22.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:22 vm06 bash[27746]: audit 2026-03-08T23:03:21.549637+0000 mon.a (mon.0) 704 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:03:22.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:22 vm06 bash[27746]: audit 2026-03-08T23:03:21.567691+0000 mon.a (mon.0) 705 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T23:03:23.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:23 vm11 bash[23232]: cluster 2026-03-08T23:03:21.499165+0000 mgr.y (mgr.14150) 258 : cluster [DBG] pgmap v240: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 49 KiB/s rd, 3.0 KiB/s wr, 116 op/s
2026-03-08T23:03:23.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:23 vm11 bash[23232]: cephadm 2026-03-08T23:03:21.524352+0000 mgr.y (mgr.14150) 259 : cephadm [INF] Checking pool "datapool" exists for service iscsi.datapool
2026-03-08T23:03:23.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:23 vm11 bash[23232]: audit 2026-03-08T23:03:22.223073+0000 mon.b (mon.1) 29 : audit [DBG] from='client.? 192.168.123.111:0/3635933486' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
2026-03-08T23:03:23.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:23 vm06 bash[20625]: cluster 2026-03-08T23:03:21.499165+0000 mgr.y (mgr.14150) 258 : cluster [DBG] pgmap v240: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 49 KiB/s rd, 3.0 KiB/s wr, 116 op/s
2026-03-08T23:03:23.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:23 vm06 bash[20625]: cephadm 2026-03-08T23:03:21.524352+0000 mgr.y (mgr.14150) 259 : cephadm [INF] Checking pool "datapool" exists for service iscsi.datapool
2026-03-08T23:03:23.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:23 vm06 bash[20625]: audit 2026-03-08T23:03:22.223073+0000 mon.b (mon.1) 29 : audit [DBG] from='client.? 192.168.123.111:0/3635933486' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
192.168.123.111:0/3635933486' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist ls"}]: dispatch 2026-03-08T23:03:23.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:23 vm06 bash[27746]: cluster 2026-03-08T23:03:21.499165+0000 mgr.y (mgr.14150) 258 : cluster [DBG] pgmap v240: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 49 KiB/s rd, 3.0 KiB/s wr, 116 op/s 2026-03-08T23:03:23.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:23 vm06 bash[27746]: cluster 2026-03-08T23:03:21.499165+0000 mgr.y (mgr.14150) 258 : cluster [DBG] pgmap v240: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 49 KiB/s rd, 3.0 KiB/s wr, 116 op/s 2026-03-08T23:03:23.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:23 vm06 bash[27746]: cephadm 2026-03-08T23:03:21.524352+0000 mgr.y (mgr.14150) 259 : cephadm [INF] Checking pool "datapool" exists for service iscsi.datapool 2026-03-08T23:03:23.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:23 vm06 bash[27746]: cephadm 2026-03-08T23:03:21.524352+0000 mgr.y (mgr.14150) 259 : cephadm [INF] Checking pool "datapool" exists for service iscsi.datapool 2026-03-08T23:03:23.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:23 vm06 bash[27746]: audit 2026-03-08T23:03:22.223073+0000 mon.b (mon.1) 29 : audit [DBG] from='client.? 192.168.123.111:0/3635933486' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist ls"}]: dispatch 2026-03-08T23:03:23.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:23 vm06 bash[27746]: audit 2026-03-08T23:03:22.223073+0000 mon.b (mon.1) 29 : audit [DBG] from='client.? 
192.168.123.111:0/3635933486' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "osd blocklist ls"}]: dispatch 2026-03-08T23:03:25.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:25 vm11 bash[23232]: cluster 2026-03-08T23:03:23.499665+0000 mgr.y (mgr.14150) 260 : cluster [DBG] pgmap v241: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 43 KiB/s rd, 2.7 KiB/s wr, 103 op/s 2026-03-08T23:03:25.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:25 vm11 bash[23232]: cluster 2026-03-08T23:03:23.499665+0000 mgr.y (mgr.14150) 260 : cluster [DBG] pgmap v241: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 43 KiB/s rd, 2.7 KiB/s wr, 103 op/s 2026-03-08T23:03:25.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:25 vm11 bash[23232]: cluster 2026-03-08T23:03:24.062170+0000 mon.a (mon.0) 706 : cluster [DBG] mgrmap e16: y(active, since 6m), standbys: x 2026-03-08T23:03:25.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:25 vm11 bash[23232]: cluster 2026-03-08T23:03:24.062170+0000 mon.a (mon.0) 706 : cluster [DBG] mgrmap e16: y(active, since 6m), standbys: x 2026-03-08T23:03:25.308 INFO:teuthology.orchestra.run.vm11.stderr:Inferring config /var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/mon.b/config 2026-03-08T23:03:25.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:25 vm06 bash[20625]: cluster 2026-03-08T23:03:23.499665+0000 mgr.y (mgr.14150) 260 : cluster [DBG] pgmap v241: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 43 KiB/s rd, 2.7 KiB/s wr, 103 op/s 2026-03-08T23:03:25.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:25 vm06 bash[20625]: cluster 2026-03-08T23:03:23.499665+0000 mgr.y (mgr.14150) 260 : cluster [DBG] pgmap v241: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 43 KiB/s rd, 2.7 KiB/s wr, 103 op/s 2026-03-08T23:03:25.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:25 vm06 bash[20625]: 
cluster 2026-03-08T23:03:24.062170+0000 mon.a (mon.0) 706 : cluster [DBG] mgrmap e16: y(active, since 6m), standbys: x 2026-03-08T23:03:25.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:25 vm06 bash[20625]: cluster 2026-03-08T23:03:24.062170+0000 mon.a (mon.0) 706 : cluster [DBG] mgrmap e16: y(active, since 6m), standbys: x 2026-03-08T23:03:25.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:25 vm06 bash[27746]: cluster 2026-03-08T23:03:23.499665+0000 mgr.y (mgr.14150) 260 : cluster [DBG] pgmap v241: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 43 KiB/s rd, 2.7 KiB/s wr, 103 op/s 2026-03-08T23:03:25.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:25 vm06 bash[27746]: cluster 2026-03-08T23:03:23.499665+0000 mgr.y (mgr.14150) 260 : cluster [DBG] pgmap v241: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 43 KiB/s rd, 2.7 KiB/s wr, 103 op/s 2026-03-08T23:03:25.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:25 vm06 bash[27746]: cluster 2026-03-08T23:03:24.062170+0000 mon.a (mon.0) 706 : cluster [DBG] mgrmap e16: y(active, since 6m), standbys: x 2026-03-08T23:03:25.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:25 vm06 bash[27746]: cluster 2026-03-08T23:03:24.062170+0000 mon.a (mon.0) 706 : cluster [DBG] mgrmap e16: y(active, since 6m), standbys: x 2026-03-08T23:03:25.653 INFO:teuthology.orchestra.run.vm11.stdout:Scheduled prometheus update... 
2026-03-08T23:03:25.714 DEBUG:teuthology.orchestra.run.vm11:prometheus.a> sudo journalctl -f -n 0 -u ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@prometheus.a.service 2026-03-08T23:03:25.715 INFO:tasks.cephadm:Adding node-exporter.a on vm06 2026-03-08T23:03:25.715 INFO:tasks.cephadm:Adding node-exporter.b on vm11 2026-03-08T23:03:25.715 DEBUG:teuthology.orchestra.run.vm11:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e2eb96e6-1b41-11f1-83e5-75f1b5373d30 -- ceph orch apply node-exporter '2;vm06=a;vm11=b' 2026-03-08T23:03:26.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:26 vm06 bash[20625]: audit 2026-03-08T23:03:25.360195+0000 mon.a (mon.0) 707 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:03:26.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:26 vm06 bash[20625]: audit 2026-03-08T23:03:25.360195+0000 mon.a (mon.0) 707 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:03:26.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:26 vm06 bash[20625]: cluster 2026-03-08T23:03:25.500061+0000 mgr.y (mgr.14150) 261 : cluster [DBG] pgmap v242: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 38 KiB/s rd, 2.4 KiB/s wr, 91 op/s 2026-03-08T23:03:26.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:26 vm06 bash[20625]: cluster 2026-03-08T23:03:25.500061+0000 mgr.y (mgr.14150) 261 : cluster [DBG] pgmap v242: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 38 KiB/s rd, 2.4 KiB/s wr, 91 op/s 2026-03-08T23:03:26.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:26 vm06 bash[20625]: audit 2026-03-08T23:03:25.644047+0000 mgr.y (mgr.14150) 262 : audit [DBG] from='client.24436 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "prometheus", "placement": 
"1;vm11=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:03:26.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:26 vm06 bash[20625]: audit 2026-03-08T23:03:25.644047+0000 mgr.y (mgr.14150) 262 : audit [DBG] from='client.24436 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "prometheus", "placement": "1;vm11=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:03:26.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:26 vm06 bash[20625]: cephadm 2026-03-08T23:03:25.645019+0000 mgr.y (mgr.14150) 263 : cephadm [INF] Saving service prometheus spec with placement vm11=a;count:1 2026-03-08T23:03:26.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:26 vm06 bash[20625]: cephadm 2026-03-08T23:03:25.645019+0000 mgr.y (mgr.14150) 263 : cephadm [INF] Saving service prometheus spec with placement vm11=a;count:1 2026-03-08T23:03:26.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:26 vm06 bash[20625]: audit 2026-03-08T23:03:25.653115+0000 mon.a (mon.0) 708 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:03:26.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:26 vm06 bash[20625]: audit 2026-03-08T23:03:25.653115+0000 mon.a (mon.0) 708 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:03:26.779 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:26 vm06 bash[27746]: audit 2026-03-08T23:03:25.360195+0000 mon.a (mon.0) 707 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:03:26.779 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:26 vm06 bash[27746]: audit 2026-03-08T23:03:25.360195+0000 mon.a (mon.0) 707 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:03:26.779 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:26 vm06 bash[27746]: cluster 2026-03-08T23:03:25.500061+0000 mgr.y (mgr.14150) 261 : cluster [DBG] pgmap v242: 132 pgs: 132 active+clean; 454 KiB data, 
216 MiB used, 160 GiB / 160 GiB avail; 38 KiB/s rd, 2.4 KiB/s wr, 91 op/s 2026-03-08T23:03:26.779 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:26 vm06 bash[27746]: cluster 2026-03-08T23:03:25.500061+0000 mgr.y (mgr.14150) 261 : cluster [DBG] pgmap v242: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 38 KiB/s rd, 2.4 KiB/s wr, 91 op/s 2026-03-08T23:03:26.779 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:26 vm06 bash[27746]: audit 2026-03-08T23:03:25.644047+0000 mgr.y (mgr.14150) 262 : audit [DBG] from='client.24436 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "prometheus", "placement": "1;vm11=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:03:26.779 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:26 vm06 bash[27746]: audit 2026-03-08T23:03:25.644047+0000 mgr.y (mgr.14150) 262 : audit [DBG] from='client.24436 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "prometheus", "placement": "1;vm11=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:03:26.779 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:26 vm06 bash[27746]: cephadm 2026-03-08T23:03:25.645019+0000 mgr.y (mgr.14150) 263 : cephadm [INF] Saving service prometheus spec with placement vm11=a;count:1 2026-03-08T23:03:26.779 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:26 vm06 bash[27746]: cephadm 2026-03-08T23:03:25.645019+0000 mgr.y (mgr.14150) 263 : cephadm [INF] Saving service prometheus spec with placement vm11=a;count:1 2026-03-08T23:03:26.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:26 vm06 bash[27746]: audit 2026-03-08T23:03:25.653115+0000 mon.a (mon.0) 708 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:03:26.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:26 vm06 bash[27746]: audit 2026-03-08T23:03:25.653115+0000 mon.a (mon.0) 708 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 
2026-03-08T23:03:26.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:26 vm11 bash[23232]: audit 2026-03-08T23:03:25.360195+0000 mon.a (mon.0) 707 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:03:26.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:26 vm11 bash[23232]: audit 2026-03-08T23:03:25.360195+0000 mon.a (mon.0) 707 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:03:26.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:26 vm11 bash[23232]: cluster 2026-03-08T23:03:25.500061+0000 mgr.y (mgr.14150) 261 : cluster [DBG] pgmap v242: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 38 KiB/s rd, 2.4 KiB/s wr, 91 op/s 2026-03-08T23:03:26.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:26 vm11 bash[23232]: cluster 2026-03-08T23:03:25.500061+0000 mgr.y (mgr.14150) 261 : cluster [DBG] pgmap v242: 132 pgs: 132 active+clean; 454 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 38 KiB/s rd, 2.4 KiB/s wr, 91 op/s 2026-03-08T23:03:26.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:26 vm11 bash[23232]: audit 2026-03-08T23:03:25.644047+0000 mgr.y (mgr.14150) 262 : audit [DBG] from='client.24436 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "prometheus", "placement": "1;vm11=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:03:26.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:26 vm11 bash[23232]: audit 2026-03-08T23:03:25.644047+0000 mgr.y (mgr.14150) 262 : audit [DBG] from='client.24436 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "prometheus", "placement": "1;vm11=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:03:26.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:26 vm11 bash[23232]: cephadm 2026-03-08T23:03:25.645019+0000 mgr.y (mgr.14150) 263 : cephadm [INF] Saving service prometheus spec with placement vm11=a;count:1 2026-03-08T23:03:26.807 
INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:26 vm11 bash[23232]: cephadm 2026-03-08T23:03:25.645019+0000 mgr.y (mgr.14150) 263 : cephadm [INF] Saving service prometheus spec with placement vm11=a;count:1 2026-03-08T23:03:26.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:26 vm11 bash[23232]: audit 2026-03-08T23:03:25.653115+0000 mon.a (mon.0) 708 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:03:26.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:26 vm11 bash[23232]: audit 2026-03-08T23:03:25.653115+0000 mon.a (mon.0) 708 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:03:27.307 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 08 23:03:27 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-08T23:03:28.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:27 vm06 bash[20625]: audit 2026-03-08T23:03:26.606850+0000 mon.a (mon.0) 709 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:03:28.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:27 vm06 bash[20625]: audit 2026-03-08T23:03:26.606850+0000 mon.a (mon.0) 709 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:03:28.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:27 vm06 bash[20625]: audit 2026-03-08T23:03:26.611973+0000 mon.a (mon.0) 710 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:03:28.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:27 vm06 bash[20625]: audit 2026-03-08T23:03:26.611973+0000 mon.a (mon.0) 710 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:03:28.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:27 vm06 bash[20625]: audit 2026-03-08T23:03:26.613594+0000 mon.a (mon.0) 711 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:03:28.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:27 vm06 bash[20625]: audit 2026-03-08T23:03:26.613594+0000 mon.a (mon.0) 711 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:03:28.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:27 vm06 bash[20625]: audit 2026-03-08T23:03:26.614231+0000 mon.a (mon.0) 712 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:03:28.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:27 vm06 bash[20625]: audit 2026-03-08T23:03:26.614231+0000 mon.a (mon.0) 712 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' 
entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:03:28.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:27 vm06 bash[20625]: audit 2026-03-08T23:03:26.621785+0000 mon.a (mon.0) 713 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:03:28.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:27 vm06 bash[20625]: audit 2026-03-08T23:03:26.621785+0000 mon.a (mon.0) 713 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:03:28.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:27 vm06 bash[20625]: cephadm 2026-03-08T23:03:26.779833+0000 mgr.y (mgr.14150) 264 : cephadm [INF] Deploying daemon prometheus.a on vm11 2026-03-08T23:03:28.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:27 vm06 bash[20625]: cephadm 2026-03-08T23:03:26.779833+0000 mgr.y (mgr.14150) 264 : cephadm [INF] Deploying daemon prometheus.a on vm11 2026-03-08T23:03:28.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:28 vm06 bash[27746]: audit 2026-03-08T23:03:26.606850+0000 mon.a (mon.0) 709 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:03:28.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:28 vm06 bash[27746]: audit 2026-03-08T23:03:26.606850+0000 mon.a (mon.0) 709 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:03:28.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:28 vm06 bash[27746]: audit 2026-03-08T23:03:26.611973+0000 mon.a (mon.0) 710 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:03:28.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:28 vm06 bash[27746]: audit 2026-03-08T23:03:26.611973+0000 mon.a (mon.0) 710 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:03:28.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:28 vm06 bash[27746]: audit 
2026-03-08T23:03:26.613594+0000 mon.a (mon.0) 711 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:03:28.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:28 vm06 bash[27746]: audit 2026-03-08T23:03:26.613594+0000 mon.a (mon.0) 711 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:03:28.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:28 vm06 bash[27746]: audit 2026-03-08T23:03:26.614231+0000 mon.a (mon.0) 712 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:03:28.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:28 vm06 bash[27746]: audit 2026-03-08T23:03:26.614231+0000 mon.a (mon.0) 712 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:03:28.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:28 vm06 bash[27746]: audit 2026-03-08T23:03:26.621785+0000 mon.a (mon.0) 713 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:03:28.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:28 vm06 bash[27746]: audit 2026-03-08T23:03:26.621785+0000 mon.a (mon.0) 713 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:03:28.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:28 vm06 bash[27746]: cephadm 2026-03-08T23:03:26.779833+0000 mgr.y (mgr.14150) 264 : cephadm [INF] Deploying daemon prometheus.a on vm11 2026-03-08T23:03:28.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:28 vm06 bash[27746]: cephadm 2026-03-08T23:03:26.779833+0000 mgr.y (mgr.14150) 264 : cephadm [INF] Deploying daemon prometheus.a on vm11 2026-03-08T23:03:28.307 
INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:28 vm11 bash[23232]: audit 2026-03-08T23:03:26.606850+0000 mon.a (mon.0) 709 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:03:28.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:28 vm11 bash[23232]: audit 2026-03-08T23:03:26.606850+0000 mon.a (mon.0) 709 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:03:28.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:28 vm11 bash[23232]: audit 2026-03-08T23:03:26.611973+0000 mon.a (mon.0) 710 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:03:28.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:28 vm11 bash[23232]: audit 2026-03-08T23:03:26.611973+0000 mon.a (mon.0) 710 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:03:28.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:28 vm11 bash[23232]: audit 2026-03-08T23:03:26.613594+0000 mon.a (mon.0) 711 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:03:28.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:28 vm11 bash[23232]: audit 2026-03-08T23:03:26.613594+0000 mon.a (mon.0) 711 : audit [DBG] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:03:28.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:28 vm11 bash[23232]: audit 2026-03-08T23:03:26.614231+0000 mon.a (mon.0) 712 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:03:28.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:28 vm11 bash[23232]: audit 2026-03-08T23:03:26.614231+0000 mon.a (mon.0) 712 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": 
"auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:03:28.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:28 vm11 bash[23232]: audit 2026-03-08T23:03:26.621785+0000 mon.a (mon.0) 713 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:03:28.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:28 vm11 bash[23232]: audit 2026-03-08T23:03:26.621785+0000 mon.a (mon.0) 713 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:03:28.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:28 vm11 bash[23232]: cephadm 2026-03-08T23:03:26.779833+0000 mgr.y (mgr.14150) 264 : cephadm [INF] Deploying daemon prometheus.a on vm11 2026-03-08T23:03:28.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:28 vm11 bash[23232]: cephadm 2026-03-08T23:03:26.779833+0000 mgr.y (mgr.14150) 264 : cephadm [INF] Deploying daemon prometheus.a on vm11 2026-03-08T23:03:29.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:29 vm06 bash[20625]: cluster 2026-03-08T23:03:27.500527+0000 mgr.y (mgr.14150) 265 : cluster [DBG] pgmap v243: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 5.4 KiB/s rd, 307 B/s wr, 12 op/s 2026-03-08T23:03:29.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:29 vm06 bash[20625]: cluster 2026-03-08T23:03:27.500527+0000 mgr.y (mgr.14150) 265 : cluster [DBG] pgmap v243: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 5.4 KiB/s rd, 307 B/s wr, 12 op/s 2026-03-08T23:03:29.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:29 vm06 bash[27746]: cluster 2026-03-08T23:03:27.500527+0000 mgr.y (mgr.14150) 265 : cluster [DBG] pgmap v243: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 5.4 KiB/s rd, 307 B/s wr, 12 op/s 2026-03-08T23:03:29.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:29 vm06 bash[27746]: cluster 2026-03-08T23:03:27.500527+0000 mgr.y (mgr.14150) 265 : 
cluster [DBG] pgmap v243: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 5.4 KiB/s rd, 307 B/s wr, 12 op/s 2026-03-08T23:03:29.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:29 vm11 bash[23232]: cluster 2026-03-08T23:03:27.500527+0000 mgr.y (mgr.14150) 265 : cluster [DBG] pgmap v243: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 5.4 KiB/s rd, 307 B/s wr, 12 op/s 2026-03-08T23:03:29.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:29 vm11 bash[23232]: cluster 2026-03-08T23:03:27.500527+0000 mgr.y (mgr.14150) 265 : cluster [DBG] pgmap v243: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 5.4 KiB/s rd, 307 B/s wr, 12 op/s 2026-03-08T23:03:30.335 INFO:teuthology.orchestra.run.vm11.stderr:Inferring config /var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/mon.b/config 2026-03-08T23:03:30.844 INFO:teuthology.orchestra.run.vm11.stdout:Scheduled node-exporter update... 2026-03-08T23:03:31.019 DEBUG:teuthology.orchestra.run.vm06:node-exporter.a> sudo journalctl -f -n 0 -u ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@node-exporter.a.service 2026-03-08T23:03:31.020 DEBUG:teuthology.orchestra.run.vm11:node-exporter.b> sudo journalctl -f -n 0 -u ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@node-exporter.b.service 2026-03-08T23:03:31.021 INFO:tasks.cephadm:Adding alertmanager.a on vm06 2026-03-08T23:03:31.021 DEBUG:teuthology.orchestra.run.vm11:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e2eb96e6-1b41-11f1-83e5-75f1b5373d30 -- ceph orch apply alertmanager '1;vm06=a' 2026-03-08T23:03:31.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:31 vm06 bash[20625]: cluster 2026-03-08T23:03:29.500951+0000 mgr.y (mgr.14150) 266 : cluster [DBG] pgmap v244: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 511 
B/s rd, 102 B/s wr, 1 op/s 2026-03-08T23:03:31.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:31 vm06 bash[20625]: cluster 2026-03-08T23:03:29.500951+0000 mgr.y (mgr.14150) 266 : cluster [DBG] pgmap v244: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 511 B/s rd, 102 B/s wr, 1 op/s 2026-03-08T23:03:31.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:31 vm06 bash[20625]: audit 2026-03-08T23:03:30.844111+0000 mon.a (mon.0) 714 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:03:31.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:31 vm06 bash[20625]: audit 2026-03-08T23:03:30.844111+0000 mon.a (mon.0) 714 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:03:31.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:31 vm06 bash[27746]: cluster 2026-03-08T23:03:29.500951+0000 mgr.y (mgr.14150) 266 : cluster [DBG] pgmap v244: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 511 B/s rd, 102 B/s wr, 1 op/s 2026-03-08T23:03:31.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:31 vm06 bash[27746]: cluster 2026-03-08T23:03:29.500951+0000 mgr.y (mgr.14150) 266 : cluster [DBG] pgmap v244: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 511 B/s rd, 102 B/s wr, 1 op/s 2026-03-08T23:03:31.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:31 vm06 bash[27746]: audit 2026-03-08T23:03:30.844111+0000 mon.a (mon.0) 714 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:03:31.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:31 vm06 bash[27746]: audit 2026-03-08T23:03:30.844111+0000 mon.a (mon.0) 714 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:03:31.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:31 vm11 bash[23232]: cluster 2026-03-08T23:03:29.500951+0000 mgr.y (mgr.14150) 266 : 
cluster [DBG] pgmap v244: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 511 B/s rd, 102 B/s wr, 1 op/s 2026-03-08T23:03:31.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:31 vm11 bash[23232]: cluster 2026-03-08T23:03:29.500951+0000 mgr.y (mgr.14150) 266 : cluster [DBG] pgmap v244: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 511 B/s rd, 102 B/s wr, 1 op/s 2026-03-08T23:03:31.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:31 vm11 bash[23232]: audit 2026-03-08T23:03:30.844111+0000 mon.a (mon.0) 714 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:03:31.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:31 vm11 bash[23232]: audit 2026-03-08T23:03:30.844111+0000 mon.a (mon.0) 714 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' 2026-03-08T23:03:32.307 INFO:journalctl@ceph.iscsi.iscsi.a.vm11.stdout:Mar 08 23:03:32 vm11 bash[48986]: debug there is no tcmu-runner data available 2026-03-08T23:03:32.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:32 vm06 bash[20625]: audit 2026-03-08T23:03:30.836624+0000 mgr.y (mgr.14150) 267 : audit [DBG] from='client.24442 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "node-exporter", "placement": "2;vm06=a;vm11=b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:03:32.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:32 vm06 bash[20625]: audit 2026-03-08T23:03:30.836624+0000 mgr.y (mgr.14150) 267 : audit [DBG] from='client.24442 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "node-exporter", "placement": "2;vm06=a;vm11=b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:03:32.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:32 vm06 bash[20625]: cephadm 2026-03-08T23:03:30.837567+0000 mgr.y (mgr.14150) 268 : cephadm [INF] Saving service node-exporter spec with placement vm06=a;vm11=b;count:2 
2026-03-08T23:03:32.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:32 vm06 bash[20625]: cephadm 2026-03-08T23:03:30.837567+0000 mgr.y (mgr.14150) 268 : cephadm [INF] Saving service node-exporter spec with placement vm06=a;vm11=b;count:2
2026-03-08T23:03:32.779 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:32 vm06 bash[27746]: audit 2026-03-08T23:03:30.836624+0000 mgr.y (mgr.14150) 267 : audit [DBG] from='client.24442 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "node-exporter", "placement": "2;vm06=a;vm11=b", "target": ["mon-mgr", ""]}]: dispatch
2026-03-08T23:03:32.779 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:32 vm06 bash[27746]: audit 2026-03-08T23:03:30.836624+0000 mgr.y (mgr.14150) 267 : audit [DBG] from='client.24442 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "node-exporter", "placement": "2;vm06=a;vm11=b", "target": ["mon-mgr", ""]}]: dispatch
2026-03-08T23:03:32.779 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:32 vm06 bash[27746]: cephadm 2026-03-08T23:03:30.837567+0000 mgr.y (mgr.14150) 268 : cephadm [INF] Saving service node-exporter spec with placement vm06=a;vm11=b;count:2
2026-03-08T23:03:32.779 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:32 vm06 bash[27746]: cephadm 2026-03-08T23:03:30.837567+0000 mgr.y (mgr.14150) 268 : cephadm [INF] Saving service node-exporter spec with placement vm06=a;vm11=b;count:2
2026-03-08T23:03:32.783 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:32 vm11 bash[23232]: audit 2026-03-08T23:03:30.836624+0000 mgr.y (mgr.14150) 267 : audit [DBG] from='client.24442 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "node-exporter", "placement": "2;vm06=a;vm11=b", "target": ["mon-mgr", ""]}]: dispatch
2026-03-08T23:03:32.783 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:32 vm11 bash[23232]: audit 2026-03-08T23:03:30.836624+0000 mgr.y (mgr.14150) 267 : audit [DBG] from='client.24442 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "node-exporter", "placement": "2;vm06=a;vm11=b", "target": ["mon-mgr", ""]}]: dispatch
2026-03-08T23:03:32.783 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:32 vm11 bash[23232]: cephadm 2026-03-08T23:03:30.837567+0000 mgr.y (mgr.14150) 268 : cephadm [INF] Saving service node-exporter spec with placement vm06=a;vm11=b;count:2
2026-03-08T23:03:32.783 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:32 vm11 bash[23232]: cephadm 2026-03-08T23:03:30.837567+0000 mgr.y (mgr.14150) 268 : cephadm [INF] Saving service node-exporter spec with placement vm06=a;vm11=b;count:2
2026-03-08T23:03:33.057 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 08 23:03:32 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:03:33.057 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 08 23:03:32 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:03:33.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:32 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
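The `orch apply` audit entries above carry the placement string `2;vm06=a;vm11=b`, which cephadm then saves as `vm06=a;vm11=b;count:2`: a count plus per-host daemon names. A small sketch of splitting such a string (a hypothetical helper for illustration, not cephadm's own parser):

```python
def parse_placement(spec: str):
    """Split a placement string like '2;vm06=a;vm11=b' into a count
    and a host -> daemon-name mapping. Semicolon-separated parts:
    a bare integer is the count, 'host=name' pins a named daemon."""
    count = None
    hosts = {}
    for part in spec.split(";"):
        if part.isdigit():
            count = int(part)
        elif "=" in part:
            host, name = part.split("=", 1)
            hosts[host] = name
        elif part:
            hosts[part] = None  # host listed without an explicit name
    return count, hosts

count, hosts = parse_placement("2;vm06=a;vm11=b")
```

For the string in the log this gives a count of 2 with daemon `a` on vm06 and `b` on vm11, matching the spec cephadm reports saving.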
2026-03-08T23:03:33.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:33 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:03:33.057 INFO:journalctl@ceph.iscsi.iscsi.a.vm11.stdout:Mar 08 23:03:32 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:03:33.057 INFO:journalctl@ceph.iscsi.iscsi.a.vm11.stdout:Mar 08 23:03:33 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:03:33.057 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:03:32 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:03:33.057 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:03:33 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:03:33.058 INFO:journalctl@ceph.osd.4.vm11.stdout:Mar 08 23:03:32 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:03:33.058 INFO:journalctl@ceph.osd.4.vm11.stdout:Mar 08 23:03:33 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:03:33.058 INFO:journalctl@ceph.osd.5.vm11.stdout:Mar 08 23:03:32 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:03:33.058 INFO:journalctl@ceph.osd.5.vm11.stdout:Mar 08 23:03:32 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:03:33.058 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 08 23:03:32 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:03:33.058 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 08 23:03:32 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:03:33.058 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 08 23:03:32 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:03:33.058 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 08 23:03:32 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:03:33.058 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 08 23:03:32 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:03:33.058 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 08 23:03:32 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:03:33.058 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 08 23:03:32 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
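The repeated systemd warning above comes from line 23 of the cephadm-generated unit template, which still sets `KillMode=none`. The remedy systemd suggests can be sketched as a drop-in override; the file path and name here are hypothetical, for illustration only (cephadm regenerates its units, so a change like this may be overwritten):

```ini
# /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service.d/killmode.conf
# Hypothetical drop-in replacing the deprecated KillMode=none with
# 'mixed' (SIGTERM to the main process, SIGKILL to any stragglers),
# one of the two values the warning recommends.
[Service]
KillMode=mixed
```

After adding a drop-in like this, `systemctl daemon-reload` would be needed for systemd to pick it up.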
2026-03-08T23:03:33.454 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 08 23:03:33 vm11 systemd[1]: Started Ceph prometheus.a for e2eb96e6-1b41-11f1-83e5-75f1b5373d30.
2026-03-08T23:03:33.455 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 08 23:03:33 vm11 bash[49943]: ts=2026-03-08T23:03:33.186Z caller=main.go:617 level=info msg="Starting Prometheus Server" mode=server version="(version=2.51.0, branch=HEAD, revision=c05c15512acb675e3f6cd662a6727854e93fc024)"
2026-03-08T23:03:33.455 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 08 23:03:33 vm11 bash[49943]: ts=2026-03-08T23:03:33.186Z caller=main.go:622 level=info build_context="(go=go1.22.1, platform=linux/amd64, user=root@b5723e458358, date=20240319-10:54:45, tags=netgo,builtinassets,stringlabels)"
2026-03-08T23:03:33.455 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 08 23:03:33 vm11 bash[49943]: ts=2026-03-08T23:03:33.186Z caller=main.go:623 level=info host_details="(Linux 5.15.0-1092-kvm #97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026 x86_64 vm11 (none))"
2026-03-08T23:03:33.455 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 08 23:03:33 vm11 bash[49943]: ts=2026-03-08T23:03:33.186Z caller=main.go:624 level=info fd_limits="(soft=1048576, hard=1048576)"
2026-03-08T23:03:33.455 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 08 23:03:33 vm11 bash[49943]: ts=2026-03-08T23:03:33.186Z caller=main.go:625 level=info vm_limits="(soft=unlimited, hard=unlimited)"
2026-03-08T23:03:33.455 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 08 23:03:33 vm11 bash[49943]: ts=2026-03-08T23:03:33.189Z caller=web.go:568 level=info component=web msg="Start listening for connections" address=:9095
2026-03-08T23:03:33.455 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 08 23:03:33 vm11 bash[49943]: ts=2026-03-08T23:03:33.189Z caller=main.go:1129 level=info msg="Starting TSDB ..."
2026-03-08T23:03:33.455 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 08 23:03:33 vm11 bash[49943]: ts=2026-03-08T23:03:33.190Z caller=tls_config.go:313 level=info component=web msg="Listening on" address=[::]:9095
2026-03-08T23:03:33.455 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 08 23:03:33 vm11 bash[49943]: ts=2026-03-08T23:03:33.190Z caller=tls_config.go:316 level=info component=web msg="TLS is disabled." http2=false address=[::]:9095
2026-03-08T23:03:33.455 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 08 23:03:33 vm11 bash[49943]: ts=2026-03-08T23:03:33.192Z caller=head.go:616 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any"
2026-03-08T23:03:33.455 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 08 23:03:33 vm11 bash[49943]: ts=2026-03-08T23:03:33.192Z caller=head.go:698 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=1.693µs
2026-03-08T23:03:33.455 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 08 23:03:33 vm11 bash[49943]: ts=2026-03-08T23:03:33.192Z caller=head.go:706 level=info component=tsdb msg="Replaying WAL, this may take a while"
2026-03-08T23:03:33.455 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 08 23:03:33 vm11 bash[49943]: ts=2026-03-08T23:03:33.192Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0
2026-03-08T23:03:33.455 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 08 23:03:33 vm11 bash[49943]: ts=2026-03-08T23:03:33.192Z caller=head.go:815 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=23.564µs wal_replay_duration=164.969µs wbl_replay_duration=169ns total_replay_duration=614.617µs
2026-03-08T23:03:33.455 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 08 23:03:33 vm11 bash[49943]: ts=2026-03-08T23:03:33.199Z caller=main.go:1150 level=info fs_type=EXT4_SUPER_MAGIC
2026-03-08T23:03:33.455 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 08 23:03:33 vm11 bash[49943]: ts=2026-03-08T23:03:33.199Z caller=main.go:1153 level=info msg="TSDB started"
2026-03-08T23:03:33.455 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 08 23:03:33 vm11 bash[49943]: ts=2026-03-08T23:03:33.199Z caller=main.go:1335 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml
2026-03-08T23:03:33.455 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 08 23:03:33 vm11 bash[49943]: ts=2026-03-08T23:03:33.211Z caller=main.go:1372 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=11.70012ms db_storage=752ns remote_storage=1.352µs web_handler=812ns query_engine=550ns scrape=2.572083ms scrape_sd=79.468µs notify=601ns notify_sd=621ns rules=8.780829ms tracing=5.681µs
2026-03-08T23:03:33.455 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 08 23:03:33 vm11 bash[49943]: ts=2026-03-08T23:03:33.211Z caller=main.go:1114 level=info msg="Server is ready to receive web requests."
2026-03-08T23:03:33.455 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 08 23:03:33 vm11 bash[49943]: ts=2026-03-08T23:03:33.211Z caller=manager.go:163 level=info component="rule manager" msg="Starting rule manager..."
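The Prometheus startup lines above use logfmt (`key=value` tokens, quoted values). A minimal sketch of splitting one such record, using `shlex` to handle the quoting (my own helper, not part of Prometheus or teuthology):

```python
import shlex

# One logfmt record as emitted by Prometheus in the journal above.
LINE = ('ts=2026-03-08T23:03:33.211Z caller=main.go:1114 level=info '
        'msg="Server is ready to receive web requests."')

def parse_logfmt(line: str) -> dict:
    """Parse a simple logfmt record into a dict. shlex.split respects
    the double quotes, so a quoted msg stays one token."""
    fields = {}
    for token in shlex.split(line):
        key, _, value = token.partition("=")
        fields[key] = value
    return fields

rec = parse_logfmt(LINE)
```

For the sample line this yields `level` of `info`, `caller` of `main.go:1114`, and the full quoted `msg` as a single value. (This simple split assumes keys contain no `=`, which holds for the records above.)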
2026-03-08T23:03:33.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:33 vm06 bash[20625]: cluster 2026-03-08T23:03:31.501383+0000 mgr.y (mgr.14150) 269 : cluster [DBG] pgmap v245: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 486 B/s rd, 97 B/s wr, 1 op/s
2026-03-08T23:03:33.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:33 vm06 bash[20625]: cluster 2026-03-08T23:03:31.501383+0000 mgr.y (mgr.14150) 269 : cluster [DBG] pgmap v245: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 486 B/s rd, 97 B/s wr, 1 op/s
2026-03-08T23:03:33.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:33 vm06 bash[20625]: audit 2026-03-08T23:03:32.030747+0000 mgr.y (mgr.14150) 270 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:03:33.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:33 vm06 bash[20625]: audit 2026-03-08T23:03:32.030747+0000 mgr.y (mgr.14150) 270 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:03:33.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:33 vm06 bash[20625]: audit 2026-03-08T23:03:33.094720+0000 mon.a (mon.0) 715 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:03:33.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:33 vm06 bash[20625]: audit 2026-03-08T23:03:33.094720+0000 mon.a (mon.0) 715 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:03:33.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:33 vm06 bash[20625]: audit 2026-03-08T23:03:33.099112+0000 mon.a (mon.0) 716 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:03:33.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:33 vm06 bash[20625]: audit 2026-03-08T23:03:33.099112+0000 mon.a (mon.0) 716 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:03:33.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:33 vm06 bash[20625]: audit 2026-03-08T23:03:33.103561+0000 mon.a (mon.0) 717 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:03:33.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:33 vm06 bash[20625]: audit 2026-03-08T23:03:33.103561+0000 mon.a (mon.0) 717 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:03:33.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:33 vm06 bash[20625]: audit 2026-03-08T23:03:33.107626+0000 mon.a (mon.0) 718 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch
2026-03-08T23:03:33.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:33 vm06 bash[20625]: audit 2026-03-08T23:03:33.107626+0000 mon.a (mon.0) 718 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch
2026-03-08T23:03:33.779 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:33 vm06 bash[27746]: cluster 2026-03-08T23:03:31.501383+0000 mgr.y (mgr.14150) 269 : cluster [DBG] pgmap v245: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 486 B/s rd, 97 B/s wr, 1 op/s
2026-03-08T23:03:33.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:33 vm06 bash[27746]: cluster 2026-03-08T23:03:31.501383+0000 mgr.y (mgr.14150) 269 : cluster [DBG] pgmap v245: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 486 B/s rd, 97 B/s wr, 1 op/s
2026-03-08T23:03:33.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:33 vm06 bash[27746]: audit 2026-03-08T23:03:32.030747+0000 mgr.y (mgr.14150) 270 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:03:33.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:33 vm06 bash[27746]: audit 2026-03-08T23:03:32.030747+0000 mgr.y (mgr.14150) 270 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:03:33.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:33 vm06 bash[27746]: audit 2026-03-08T23:03:33.094720+0000 mon.a (mon.0) 715 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:03:33.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:33 vm06 bash[27746]: audit 2026-03-08T23:03:33.094720+0000 mon.a (mon.0) 715 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:03:33.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:33 vm06 bash[27746]: audit 2026-03-08T23:03:33.099112+0000 mon.a (mon.0) 716 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:03:33.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:33 vm06 bash[27746]: audit 2026-03-08T23:03:33.099112+0000 mon.a (mon.0) 716 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:03:33.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:33 vm06 bash[27746]: audit 2026-03-08T23:03:33.103561+0000 mon.a (mon.0) 717 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:03:33.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:33 vm06 bash[27746]: audit 2026-03-08T23:03:33.103561+0000 mon.a (mon.0) 717 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:03:33.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:33 vm06 bash[27746]: audit 2026-03-08T23:03:33.107626+0000 mon.a (mon.0) 718 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch
2026-03-08T23:03:33.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:33 vm06 bash[27746]: audit 2026-03-08T23:03:33.107626+0000 mon.a (mon.0) 718 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch
2026-03-08T23:03:33.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:33 vm11 bash[23232]: cluster 2026-03-08T23:03:31.501383+0000 mgr.y (mgr.14150) 269 : cluster [DBG] pgmap v245: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 486 B/s rd, 97 B/s wr, 1 op/s
2026-03-08T23:03:33.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:33 vm11 bash[23232]: cluster 2026-03-08T23:03:31.501383+0000 mgr.y (mgr.14150) 269 : cluster [DBG] pgmap v245: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 486 B/s rd, 97 B/s wr, 1 op/s
2026-03-08T23:03:33.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:33 vm11 bash[23232]: audit 2026-03-08T23:03:32.030747+0000 mgr.y (mgr.14150) 270 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:03:33.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:33 vm11 bash[23232]: audit 2026-03-08T23:03:32.030747+0000 mgr.y (mgr.14150) 270 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:03:33.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:33 vm11 bash[23232]: audit 2026-03-08T23:03:33.094720+0000 mon.a (mon.0) 715 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:03:33.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:33 vm11 bash[23232]: audit 2026-03-08T23:03:33.094720+0000 mon.a (mon.0) 715 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:03:33.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:33 vm11 bash[23232]: audit 2026-03-08T23:03:33.099112+0000 mon.a (mon.0) 716 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:03:33.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:33 vm11 bash[23232]: audit 2026-03-08T23:03:33.099112+0000 mon.a (mon.0) 716 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:03:33.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:33 vm11 bash[23232]: audit 2026-03-08T23:03:33.103561+0000 mon.a (mon.0) 717 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:03:33.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:33 vm11 bash[23232]: audit 2026-03-08T23:03:33.103561+0000 mon.a (mon.0) 717 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y'
2026-03-08T23:03:33.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:33 vm11 bash[23232]: audit 2026-03-08T23:03:33.107626+0000 mon.a (mon.0) 718 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch
2026-03-08T23:03:33.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:33 vm11 bash[23232]: audit 2026-03-08T23:03:33.107626+0000 mon.a (mon.0) 718 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch
2026-03-08T23:03:34.529 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:03:34 vm06 bash[20883]: ignoring --setuser ceph since I am not root
2026-03-08T23:03:34.529 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:03:34 vm06 bash[20883]: ignoring --setgroup ceph since I am not root
2026-03-08T23:03:34.529 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:03:34 vm06 bash[20883]: debug 2026-03-08T23:03:34.230+0000 7f759bdbf140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
2026-03-08T23:03:34.529 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:03:34 vm06 bash[20883]: debug 2026-03-08T23:03:34.266+0000 7f759bdbf140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
2026-03-08T23:03:34.529 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:03:34 vm06 bash[20883]: debug 2026-03-08T23:03:34.382+0000 7f759bdbf140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
2026-03-08T23:03:34.557 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:03:34 vm11 bash[24047]: ignoring --setuser ceph since I am not root
2026-03-08T23:03:34.557 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:03:34 vm11 bash[24047]: ignoring --setgroup ceph since I am not root
2026-03-08T23:03:34.557 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:03:34 vm11 bash[24047]: debug 2026-03-08T23:03:34.174+0000 7feecdb7b640 1 -- 192.168.123.111:0/2461779117 <== mon.2 v2:192.168.123.106:3301/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x55ad2f7884e0 con 0x55ad2f766800
2026-03-08T23:03:34.557 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:03:34 vm11 bash[24047]: debug 2026-03-08T23:03:34.174+0000 7feecdb7b640 1 -- 192.168.123.111:0/2461779117 <== mon.2 v2:192.168.123.106:3301/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 194+0+0 (secure 0 0 0) 0x55ad2f7654a0 con 0x55ad2f766800
2026-03-08T23:03:34.557 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:03:34 vm11 bash[24047]: debug 2026-03-08T23:03:34.238+0000 7feed03d8140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
2026-03-08T23:03:34.557 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:03:34 vm11 bash[24047]: debug 2026-03-08T23:03:34.270+0000 7feed03d8140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
2026-03-08T23:03:34.557 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:03:34 vm11 bash[24047]: debug 2026-03-08T23:03:34.390+0000 7feed03d8140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
2026-03-08T23:03:35.029 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:03:34 vm06 bash[20883]: debug 2026-03-08T23:03:34.682+0000 7f759bdbf140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
2026-03-08T23:03:35.057 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:03:34 vm11 bash[24047]: debug 2026-03-08T23:03:34.682+0000 7feed03d8140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
2026-03-08T23:03:35.373 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:35 vm06 bash[20625]: audit 2026-03-08T23:03:34.119445+0000 mon.a (mon.0) 719 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished
2026-03-08T23:03:35.374 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:35 vm06 bash[20625]: audit 2026-03-08T23:03:34.119445+0000 mon.a (mon.0) 719 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished
2026-03-08T23:03:35.374 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:35 vm06 bash[20625]: cluster 2026-03-08T23:03:34.125230+0000 mon.a (mon.0) 720 : cluster [DBG] mgrmap e17: y(active, since 6m), standbys: x
2026-03-08T23:03:35.374 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:35 vm06 bash[20625]: cluster 2026-03-08T23:03:34.125230+0000 mon.a (mon.0) 720 : cluster [DBG] mgrmap e17: y(active, since 6m), standbys: x
2026-03-08T23:03:35.374 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:03:35 vm06 bash[20883]: debug 2026-03-08T23:03:35.150+0000 7f759bdbf140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
2026-03-08T23:03:35.374 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:03:35 vm06 bash[20883]: debug 2026-03-08T23:03:35.238+0000 7f759bdbf140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
2026-03-08T23:03:35.374 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:03:35 vm06 bash[20883]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
2026-03-08T23:03:35.374 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:03:35 vm06 bash[20883]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
2026-03-08T23:03:35.374 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:03:35 vm06 bash[20883]: from numpy import show_config as show_numpy_config
2026-03-08T23:03:35.374 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:35 vm06 bash[27746]: audit 2026-03-08T23:03:34.119445+0000 mon.a (mon.0) 719 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished
2026-03-08T23:03:35.374 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:35 vm06 bash[27746]: audit 2026-03-08T23:03:34.119445+0000 mon.a (mon.0) 719 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished
2026-03-08T23:03:35.374 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:35 vm06 bash[27746]: cluster 2026-03-08T23:03:34.125230+0000 mon.a (mon.0) 720 : cluster [DBG] mgrmap e17: y(active, since 6m), standbys: x
2026-03-08T23:03:35.374 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:35 vm06 bash[27746]: cluster 2026-03-08T23:03:34.125230+0000 mon.a (mon.0) 720 : cluster [DBG] mgrmap e17: y(active, since 6m), standbys: x
2026-03-08T23:03:35.390 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:35 vm11 bash[23232]: audit 2026-03-08T23:03:34.119445+0000 mon.a (mon.0) 719 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished
2026-03-08T23:03:35.390 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:35 vm11 bash[23232]: audit 2026-03-08T23:03:34.119445+0000 mon.a (mon.0) 719 : audit [INF] from='mgr.14150 192.168.123.106:0/2664587391' entity='mgr.y' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished
2026-03-08T23:03:35.390 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:35 vm11 bash[23232]: cluster 2026-03-08T23:03:34.125230+0000 mon.a (mon.0) 720 : cluster [DBG] mgrmap e17: y(active, since 6m), standbys: x
2026-03-08T23:03:35.390 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:35 vm11 bash[23232]: cluster 2026-03-08T23:03:34.125230+0000 mon.a (mon.0) 720 : cluster [DBG] mgrmap e17: y(active, since 6m), standbys: x
2026-03-08T23:03:35.391 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:03:35 vm11 bash[24047]: debug 2026-03-08T23:03:35.170+0000 7feed03d8140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
2026-03-08T23:03:35.391 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:03:35 vm11 bash[24047]: debug 2026-03-08T23:03:35.262+0000 7feed03d8140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
2026-03-08T23:03:35.642 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:03:35 vm06 bash[20883]: debug 2026-03-08T23:03:35.370+0000 7f759bdbf140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
2026-03-08T23:03:35.643 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:03:35 vm06 bash[20883]: debug 2026-03-08T23:03:35.510+0000 7f759bdbf140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
2026-03-08T23:03:35.643 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:03:35 vm06 bash[20883]: debug 2026-03-08T23:03:35.550+0000 7f759bdbf140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
2026-03-08T23:03:35.643 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:03:35 vm06 bash[20883]: debug 2026-03-08T23:03:35.594+0000 7f759bdbf140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
2026-03-08T23:03:35.682 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:03:35 vm11 bash[24047]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
2026-03-08T23:03:35.682 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:03:35 vm11 bash[24047]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
2026-03-08T23:03:35.682 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:03:35 vm11 bash[24047]: from numpy import show_config as show_numpy_config
2026-03-08T23:03:35.682 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:03:35 vm11 bash[24047]: debug 2026-03-08T23:03:35.390+0000 7feed03d8140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
2026-03-08T23:03:35.682 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:03:35 vm11 bash[24047]: debug 2026-03-08T23:03:35.538+0000 7feed03d8140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
2026-03-08T23:03:35.682 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:03:35 vm11 bash[24047]: debug 2026-03-08T23:03:35.590+0000 7feed03d8140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
2026-03-08T23:03:35.682 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:03:35 vm11 bash[24047]: debug 2026-03-08T23:03:35.634+0000 7feed03d8140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
2026-03-08T23:03:36.029 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:03:35 vm06 bash[20883]: debug 2026-03-08T23:03:35.638+0000 7f759bdbf140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
2026-03-08T23:03:36.029 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:03:35 vm06 bash[20883]: debug 2026-03-08T23:03:35.690+0000 7f759bdbf140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
2026-03-08T23:03:36.057 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:03:35 vm11 bash[24047]: debug 2026-03-08T23:03:35.678+0000 7feed03d8140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-08T23:03:36.057 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:03:35 vm11 bash[24047]: debug 2026-03-08T23:03:35.730+0000 7feed03d8140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-08T23:03:36.468 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:03:36 vm06 bash[20883]: debug 2026-03-08T23:03:36.206+0000 7f759bdbf140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-08T23:03:36.468 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:03:36 vm06 bash[20883]: debug 2026-03-08T23:03:36.246+0000 7f759bdbf140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-08T23:03:36.468 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:03:36 vm06 bash[20883]: debug 2026-03-08T23:03:36.282+0000 7f759bdbf140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-08T23:03:36.468 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:03:36 vm06 bash[20883]: debug 2026-03-08T23:03:36.422+0000 7f759bdbf140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-08T23:03:36.551 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:03:36 vm11 bash[24047]: debug 2026-03-08T23:03:36.274+0000 7feed03d8140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-08T23:03:36.551 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:03:36 vm11 bash[24047]: debug 2026-03-08T23:03:36.314+0000 7feed03d8140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-08T23:03:36.551 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:03:36 vm11 bash[24047]: debug 2026-03-08T23:03:36.350+0000 7feed03d8140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-08T23:03:36.551 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:03:36 vm11 bash[24047]: debug 2026-03-08T23:03:36.498+0000 7feed03d8140 -1 mgr[py] Module 
zabbix has missing NOTIFY_TYPES member 2026-03-08T23:03:36.719 INFO:teuthology.orchestra.run.vm11.stderr:Inferring config /var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/mon.b/config 2026-03-08T23:03:36.779 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:03:36 vm06 bash[20883]: debug 2026-03-08T23:03:36.462+0000 7f759bdbf140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-08T23:03:36.780 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:03:36 vm06 bash[20883]: debug 2026-03-08T23:03:36.510+0000 7f759bdbf140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-08T23:03:36.780 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:03:36 vm06 bash[20883]: debug 2026-03-08T23:03:36.646+0000 7f759bdbf140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-08T23:03:36.807 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:03:36 vm11 bash[24047]: debug 2026-03-08T23:03:36.546+0000 7feed03d8140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-08T23:03:36.812 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:03:36 vm11 bash[24047]: debug 2026-03-08T23:03:36.594+0000 7feed03d8140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-08T23:03:36.812 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:03:36 vm11 bash[24047]: debug 2026-03-08T23:03:36.742+0000 7feed03d8140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-08T23:03:37.087 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:03:36 vm06 bash[20883]: debug 2026-03-08T23:03:36.822+0000 7f759bdbf140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-08T23:03:37.087 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:03:37 vm06 bash[20883]: debug 2026-03-08T23:03:37.038+0000 7f759bdbf140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-08T23:03:37.199 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:03:36 vm11 bash[24047]: debug 2026-03-08T23:03:36.938+0000 7feed03d8140 -1 mgr[py] Module nfs has 
missing NOTIFY_TYPES member 2026-03-08T23:03:37.478 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:03:37 vm11 bash[24047]: debug 2026-03-08T23:03:37.194+0000 7feed03d8140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-08T23:03:37.478 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:03:37 vm11 bash[24047]: debug 2026-03-08T23:03:37.234+0000 7feed03d8140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-08T23:03:37.478 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:03:37 vm11 bash[24047]: debug 2026-03-08T23:03:37.302+0000 7feed03d8140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-08T23:03:37.529 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:03:37 vm06 bash[20883]: debug 2026-03-08T23:03:37.082+0000 7f759bdbf140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-08T23:03:37.529 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:03:37 vm06 bash[20883]: debug 2026-03-08T23:03:37.134+0000 7f759bdbf140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-08T23:03:37.529 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:03:37 vm06 bash[20883]: debug 2026-03-08T23:03:37.330+0000 7f759bdbf140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-08T23:03:37.735 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:37 vm11 bash[23232]: cluster 2026-03-08T23:03:37.585987+0000 mon.a (mon.0) 721 : cluster [INF] Active manager daemon y restarted 2026-03-08T23:03:37.735 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:37 vm11 bash[23232]: cluster 2026-03-08T23:03:37.585987+0000 mon.a (mon.0) 721 : cluster [INF] Active manager daemon y restarted 2026-03-08T23:03:37.735 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:37 vm11 bash[23232]: cluster 2026-03-08T23:03:37.586374+0000 mon.a (mon.0) 722 : cluster [INF] Activating manager daemon y 2026-03-08T23:03:37.735 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:37 vm11 bash[23232]: cluster 
2026-03-08T23:03:37.586374+0000 mon.a (mon.0) 722 : cluster [INF] Activating manager daemon y 2026-03-08T23:03:37.735 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:37 vm11 bash[23232]: cluster 2026-03-08T23:03:37.608094+0000 mon.a (mon.0) 723 : cluster [DBG] osdmap e66: 8 total, 8 up, 8 in 2026-03-08T23:03:37.735 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:37 vm11 bash[23232]: cluster 2026-03-08T23:03:37.608094+0000 mon.a (mon.0) 723 : cluster [DBG] osdmap e66: 8 total, 8 up, 8 in 2026-03-08T23:03:37.735 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:37 vm11 bash[23232]: cluster 2026-03-08T23:03:37.608870+0000 mon.a (mon.0) 724 : cluster [DBG] mgrmap e18: y(active, starting, since 0.0226182s), standbys: x 2026-03-08T23:03:37.735 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:37 vm11 bash[23232]: cluster 2026-03-08T23:03:37.608870+0000 mon.a (mon.0) 724 : cluster [DBG] mgrmap e18: y(active, starting, since 0.0226182s), standbys: x 2026-03-08T23:03:37.735 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:37 vm11 bash[23232]: audit 2026-03-08T23:03:37.618806+0000 mon.c (mon.2) 16 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-08T23:03:37.735 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:37 vm11 bash[23232]: audit 2026-03-08T23:03:37.618806+0000 mon.c (mon.2) 16 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-08T23:03:37.735 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:37 vm11 bash[23232]: audit 2026-03-08T23:03:37.618991+0000 mon.c (mon.2) 17 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-08T23:03:37.735 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:37 vm11 bash[23232]: audit 2026-03-08T23:03:37.618991+0000 mon.c (mon.2) 17 : audit [DBG] from='mgr.24419 
192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-08T23:03:37.735 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:37 vm11 bash[23232]: audit 2026-03-08T23:03:37.619135+0000 mon.c (mon.2) 18 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-08T23:03:37.735 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:37 vm11 bash[23232]: audit 2026-03-08T23:03:37.619135+0000 mon.c (mon.2) 18 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-08T23:03:37.735 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:37 vm11 bash[23232]: audit 2026-03-08T23:03:37.620672+0000 mon.c (mon.2) 19 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-08T23:03:37.735 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:37 vm11 bash[23232]: audit 2026-03-08T23:03:37.620672+0000 mon.c (mon.2) 19 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-08T23:03:37.735 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:37 vm11 bash[23232]: audit 2026-03-08T23:03:37.621184+0000 mon.c (mon.2) 20 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-08T23:03:37.735 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:37 vm11 bash[23232]: audit 2026-03-08T23:03:37.621184+0000 mon.c (mon.2) 20 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-08T23:03:37.735 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:37 vm11 bash[23232]: audit 2026-03-08T23:03:37.622259+0000 mon.c (mon.2) 21 : audit [DBG] 
from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-08T23:03:37.736 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:37 vm11 bash[23232]: audit 2026-03-08T23:03:37.622259+0000 mon.c (mon.2) 21 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-08T23:03:37.736 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:37 vm11 bash[23232]: audit 2026-03-08T23:03:37.622479+0000 mon.c (mon.2) 22 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-08T23:03:37.736 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:37 vm11 bash[23232]: audit 2026-03-08T23:03:37.622479+0000 mon.c (mon.2) 22 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-08T23:03:37.736 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:37 vm11 bash[23232]: audit 2026-03-08T23:03:37.623482+0000 mon.c (mon.2) 23 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-08T23:03:37.736 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:37 vm11 bash[23232]: audit 2026-03-08T23:03:37.623482+0000 mon.c (mon.2) 23 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-08T23:03:37.736 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:37 vm11 bash[23232]: audit 2026-03-08T23:03:37.624115+0000 mon.c (mon.2) 24 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-08T23:03:37.736 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:37 vm11 bash[23232]: audit 2026-03-08T23:03:37.624115+0000 mon.c (mon.2) 24 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' 
entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-08T23:03:37.736 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:37 vm11 bash[23232]: audit 2026-03-08T23:03:37.624686+0000 mon.c (mon.2) 25 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-08T23:03:37.736 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:37 vm11 bash[23232]: audit 2026-03-08T23:03:37.624686+0000 mon.c (mon.2) 25 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-08T23:03:37.736 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:37 vm11 bash[23232]: audit 2026-03-08T23:03:37.625275+0000 mon.c (mon.2) 26 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-08T23:03:37.736 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:37 vm11 bash[23232]: audit 2026-03-08T23:03:37.625275+0000 mon.c (mon.2) 26 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-08T23:03:37.736 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:37 vm11 bash[23232]: audit 2026-03-08T23:03:37.625837+0000 mon.c (mon.2) 27 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-08T23:03:37.736 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:37 vm11 bash[23232]: audit 2026-03-08T23:03:37.625837+0000 mon.c (mon.2) 27 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-08T23:03:37.736 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:37 vm11 bash[23232]: audit 2026-03-08T23:03:37.626341+0000 mon.c (mon.2) 28 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 
7}]: dispatch 2026-03-08T23:03:37.736 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:37 vm11 bash[23232]: audit 2026-03-08T23:03:37.626341+0000 mon.c (mon.2) 28 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-08T23:03:37.736 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:37 vm11 bash[23232]: audit 2026-03-08T23:03:37.627150+0000 mon.c (mon.2) 29 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-08T23:03:37.736 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:37 vm11 bash[23232]: audit 2026-03-08T23:03:37.627150+0000 mon.c (mon.2) 29 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-08T23:03:37.736 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:37 vm11 bash[23232]: audit 2026-03-08T23:03:37.627712+0000 mon.c (mon.2) 30 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-08T23:03:37.736 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:37 vm11 bash[23232]: audit 2026-03-08T23:03:37.627712+0000 mon.c (mon.2) 30 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-08T23:03:37.736 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:37 vm11 bash[23232]: audit 2026-03-08T23:03:37.628400+0000 mon.c (mon.2) 31 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-08T23:03:37.736 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:37 vm11 bash[23232]: audit 2026-03-08T23:03:37.628400+0000 mon.c (mon.2) 31 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-08T23:03:37.736 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:37 vm11 
bash[23232]: cluster 2026-03-08T23:03:37.636752+0000 mon.a (mon.0) 725 : cluster [INF] Manager daemon y is now available 2026-03-08T23:03:37.736 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:37 vm11 bash[23232]: cluster 2026-03-08T23:03:37.636752+0000 mon.a (mon.0) 725 : cluster [INF] Manager daemon y is now available 2026-03-08T23:03:37.736 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:03:37 vm11 bash[24047]: debug 2026-03-08T23:03:37.474+0000 7feed03d8140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-08T23:03:37.838 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:37 vm06 bash[20625]: cluster 2026-03-08T23:03:37.585987+0000 mon.a (mon.0) 721 : cluster [INF] Active manager daemon y restarted 2026-03-08T23:03:37.838 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:37 vm06 bash[20625]: cluster 2026-03-08T23:03:37.585987+0000 mon.a (mon.0) 721 : cluster [INF] Active manager daemon y restarted 2026-03-08T23:03:37.838 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:37 vm06 bash[20625]: cluster 2026-03-08T23:03:37.586374+0000 mon.a (mon.0) 722 : cluster [INF] Activating manager daemon y 2026-03-08T23:03:37.838 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:37 vm06 bash[20625]: cluster 2026-03-08T23:03:37.586374+0000 mon.a (mon.0) 722 : cluster [INF] Activating manager daemon y 2026-03-08T23:03:37.838 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:37 vm06 bash[20625]: cluster 2026-03-08T23:03:37.608094+0000 mon.a (mon.0) 723 : cluster [DBG] osdmap e66: 8 total, 8 up, 8 in 2026-03-08T23:03:37.838 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:37 vm06 bash[20625]: cluster 2026-03-08T23:03:37.608094+0000 mon.a (mon.0) 723 : cluster [DBG] osdmap e66: 8 total, 8 up, 8 in 2026-03-08T23:03:37.838 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:37 vm06 bash[20625]: cluster 2026-03-08T23:03:37.608870+0000 mon.a (mon.0) 724 : cluster [DBG] mgrmap e18: y(active, starting, since 0.0226182s), standbys: x 
2026-03-08T23:03:37.838 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:37 vm06 bash[20625]: cluster 2026-03-08T23:03:37.608870+0000 mon.a (mon.0) 724 : cluster [DBG] mgrmap e18: y(active, starting, since 0.0226182s), standbys: x 2026-03-08T23:03:37.838 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:37 vm06 bash[20625]: audit 2026-03-08T23:03:37.618806+0000 mon.c (mon.2) 16 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-08T23:03:37.838 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:37 vm06 bash[20625]: audit 2026-03-08T23:03:37.618806+0000 mon.c (mon.2) 16 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-08T23:03:37.839 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:37 vm06 bash[20625]: audit 2026-03-08T23:03:37.618991+0000 mon.c (mon.2) 17 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-08T23:03:37.839 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:37 vm06 bash[20625]: audit 2026-03-08T23:03:37.618991+0000 mon.c (mon.2) 17 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-08T23:03:37.839 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:37 vm06 bash[20625]: audit 2026-03-08T23:03:37.619135+0000 mon.c (mon.2) 18 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-08T23:03:37.839 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:37 vm06 bash[20625]: audit 2026-03-08T23:03:37.619135+0000 mon.c (mon.2) 18 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-08T23:03:37.839 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:37 vm06 
bash[20625]: audit 2026-03-08T23:03:37.620672+0000 mon.c (mon.2) 19 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-08T23:03:37.839 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:37 vm06 bash[20625]: audit 2026-03-08T23:03:37.620672+0000 mon.c (mon.2) 19 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-08T23:03:37.839 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:37 vm06 bash[20625]: audit 2026-03-08T23:03:37.621184+0000 mon.c (mon.2) 20 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-08T23:03:37.839 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:37 vm06 bash[20625]: audit 2026-03-08T23:03:37.621184+0000 mon.c (mon.2) 20 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-08T23:03:37.839 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:37 vm06 bash[20625]: audit 2026-03-08T23:03:37.622259+0000 mon.c (mon.2) 21 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-08T23:03:37.839 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:37 vm06 bash[20625]: audit 2026-03-08T23:03:37.622259+0000 mon.c (mon.2) 21 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-08T23:03:37.839 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:37 vm06 bash[20625]: audit 2026-03-08T23:03:37.622479+0000 mon.c (mon.2) 22 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-08T23:03:37.839 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:37 
vm06 bash[20625]: audit 2026-03-08T23:03:37.622479+0000 mon.c (mon.2) 22 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-08T23:03:37.839 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:37 vm06 bash[20625]: audit 2026-03-08T23:03:37.623482+0000 mon.c (mon.2) 23 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-08T23:03:37.839 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:37 vm06 bash[20625]: audit 2026-03-08T23:03:37.623482+0000 mon.c (mon.2) 23 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-08T23:03:37.839 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:37 vm06 bash[20625]: audit 2026-03-08T23:03:37.624115+0000 mon.c (mon.2) 24 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-08T23:03:37.839 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:37 vm06 bash[20625]: audit 2026-03-08T23:03:37.624115+0000 mon.c (mon.2) 24 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-08T23:03:37.839 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:37 vm06 bash[20625]: audit 2026-03-08T23:03:37.624686+0000 mon.c (mon.2) 25 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-08T23:03:37.839 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:37 vm06 bash[20625]: audit 2026-03-08T23:03:37.624686+0000 mon.c (mon.2) 25 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-08T23:03:37.839 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:37 vm06 bash[20625]: audit 
2026-03-08T23:03:37.625275+0000 mon.c (mon.2) 26 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-08T23:03:37.839 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:37 vm06 bash[20625]: audit 2026-03-08T23:03:37.625275+0000 mon.c (mon.2) 26 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-08T23:03:37.839 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:37 vm06 bash[20625]: audit 2026-03-08T23:03:37.625837+0000 mon.c (mon.2) 27 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-08T23:03:37.839 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:37 vm06 bash[20625]: audit 2026-03-08T23:03:37.625837+0000 mon.c (mon.2) 27 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-08T23:03:37.839 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:37 vm06 bash[20625]: audit 2026-03-08T23:03:37.626341+0000 mon.c (mon.2) 28 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-08T23:03:37.839 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:37 vm06 bash[20625]: audit 2026-03-08T23:03:37.626341+0000 mon.c (mon.2) 28 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-08T23:03:37.839 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:37 vm06 bash[20625]: audit 2026-03-08T23:03:37.627150+0000 mon.c (mon.2) 29 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-08T23:03:37.839 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:37 vm06 bash[20625]: audit 2026-03-08T23:03:37.627150+0000 mon.c (mon.2) 29 : audit [DBG] 
from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-08T23:03:37.839 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:37 vm06 bash[20625]: audit 2026-03-08T23:03:37.627712+0000 mon.c (mon.2) 30 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-08T23:03:37.839 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:37 vm06 bash[20625]: audit 2026-03-08T23:03:37.627712+0000 mon.c (mon.2) 30 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-08T23:03:37.839 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:37 vm06 bash[20625]: audit 2026-03-08T23:03:37.628400+0000 mon.c (mon.2) 31 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-08T23:03:37.839 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:37 vm06 bash[20625]: audit 2026-03-08T23:03:37.628400+0000 mon.c (mon.2) 31 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-08T23:03:37.839 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:37 vm06 bash[20625]: cluster 2026-03-08T23:03:37.636752+0000 mon.a (mon.0) 725 : cluster [INF] Manager daemon y is now available 2026-03-08T23:03:37.839 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:37 vm06 bash[20625]: cluster 2026-03-08T23:03:37.636752+0000 mon.a (mon.0) 725 : cluster [INF] Manager daemon y is now available 2026-03-08T23:03:37.839 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:03:37 vm06 bash[20883]: debug 2026-03-08T23:03:37.578+0000 7f759bdbf140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-08T23:03:37.839 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:03:37 vm06 bash[20883]: [08/Mar/2026:23:03:37] ENGINE Bus STARTING 2026-03-08T23:03:37.839 
INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:03:37 vm06 bash[20883]: CherryPy Checker: 2026-03-08T23:03:37.839 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:03:37 vm06 bash[20883]: The Application mounted at '' has an empty config. 2026-03-08T23:03:37.839 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:03:37 vm06 bash[20883]: [08/Mar/2026:23:03:37] ENGINE Serving on http://:::9283 2026-03-08T23:03:37.839 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:37 vm06 bash[27746]: cluster 2026-03-08T23:03:37.585987+0000 mon.a (mon.0) 721 : cluster [INF] Active manager daemon y restarted 2026-03-08T23:03:37.839 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:37 vm06 bash[27746]: cluster 2026-03-08T23:03:37.585987+0000 mon.a (mon.0) 721 : cluster [INF] Active manager daemon y restarted 2026-03-08T23:03:37.839 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:37 vm06 bash[27746]: cluster 2026-03-08T23:03:37.586374+0000 mon.a (mon.0) 722 : cluster [INF] Activating manager daemon y 2026-03-08T23:03:37.839 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:37 vm06 bash[27746]: cluster 2026-03-08T23:03:37.586374+0000 mon.a (mon.0) 722 : cluster [INF] Activating manager daemon y 2026-03-08T23:03:37.839 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:37 vm06 bash[27746]: cluster 2026-03-08T23:03:37.608094+0000 mon.a (mon.0) 723 : cluster [DBG] osdmap e66: 8 total, 8 up, 8 in 2026-03-08T23:03:37.840 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:37 vm06 bash[27746]: cluster 2026-03-08T23:03:37.608094+0000 mon.a (mon.0) 723 : cluster [DBG] osdmap e66: 8 total, 8 up, 8 in 2026-03-08T23:03:37.840 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:37 vm06 bash[27746]: cluster 2026-03-08T23:03:37.608870+0000 mon.a (mon.0) 724 : cluster [DBG] mgrmap e18: y(active, starting, since 0.0226182s), standbys: x 2026-03-08T23:03:37.840 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:37 vm06 bash[27746]: cluster 2026-03-08T23:03:37.608870+0000 mon.a (mon.0) 724 : 
cluster [DBG] mgrmap e18: y(active, starting, since 0.0226182s), standbys: x 2026-03-08T23:03:37.840 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:37 vm06 bash[27746]: audit 2026-03-08T23:03:37.618806+0000 mon.c (mon.2) 16 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-08T23:03:37.840 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:37 vm06 bash[27746]: audit 2026-03-08T23:03:37.618806+0000 mon.c (mon.2) 16 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-08T23:03:37.840 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:37 vm06 bash[27746]: audit 2026-03-08T23:03:37.618991+0000 mon.c (mon.2) 17 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-08T23:03:37.840 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:37 vm06 bash[27746]: audit 2026-03-08T23:03:37.618991+0000 mon.c (mon.2) 17 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-08T23:03:37.840 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:37 vm06 bash[27746]: audit 2026-03-08T23:03:37.619135+0000 mon.c (mon.2) 18 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-08T23:03:37.840 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:37 vm06 bash[27746]: audit 2026-03-08T23:03:37.619135+0000 mon.c (mon.2) 18 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-08T23:03:37.840 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:37 vm06 bash[27746]: audit 2026-03-08T23:03:37.620672+0000 mon.c (mon.2) 19 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": 
"mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-08T23:03:37.840 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:37 vm06 bash[27746]: audit 2026-03-08T23:03:37.620672+0000 mon.c (mon.2) 19 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch 2026-03-08T23:03:37.840 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:37 vm06 bash[27746]: audit 2026-03-08T23:03:37.621184+0000 mon.c (mon.2) 20 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-08T23:03:37.840 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:37 vm06 bash[27746]: audit 2026-03-08T23:03:37.621184+0000 mon.c (mon.2) 20 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-08T23:03:37.840 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:37 vm06 bash[27746]: audit 2026-03-08T23:03:37.622259+0000 mon.c (mon.2) 21 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-08T23:03:37.840 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:37 vm06 bash[27746]: audit 2026-03-08T23:03:37.622259+0000 mon.c (mon.2) 21 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-08T23:03:37.840 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:37 vm06 bash[27746]: audit 2026-03-08T23:03:37.622479+0000 mon.c (mon.2) 22 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-08T23:03:37.840 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:37 vm06 bash[27746]: audit 2026-03-08T23:03:37.622479+0000 mon.c (mon.2) 22 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' 
cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-08T23:03:37.840 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:37 vm06 bash[27746]: audit 2026-03-08T23:03:37.623482+0000 mon.c (mon.2) 23 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-08T23:03:37.840 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:37 vm06 bash[27746]: audit 2026-03-08T23:03:37.623482+0000 mon.c (mon.2) 23 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-08T23:03:37.840 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:37 vm06 bash[27746]: audit 2026-03-08T23:03:37.624115+0000 mon.c (mon.2) 24 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-08T23:03:37.840 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:37 vm06 bash[27746]: audit 2026-03-08T23:03:37.624115+0000 mon.c (mon.2) 24 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-08T23:03:37.840 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:37 vm06 bash[27746]: audit 2026-03-08T23:03:37.624686+0000 mon.c (mon.2) 25 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-08T23:03:37.840 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:37 vm06 bash[27746]: audit 2026-03-08T23:03:37.624686+0000 mon.c (mon.2) 25 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-08T23:03:37.840 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:37 vm06 bash[27746]: audit 2026-03-08T23:03:37.625275+0000 mon.c (mon.2) 26 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 
2026-03-08T23:03:37.840 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:37 vm06 bash[27746]: audit 2026-03-08T23:03:37.625275+0000 mon.c (mon.2) 26 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-08T23:03:37.840 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:37 vm06 bash[27746]: audit 2026-03-08T23:03:37.625837+0000 mon.c (mon.2) 27 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-08T23:03:37.840 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:37 vm06 bash[27746]: audit 2026-03-08T23:03:37.625837+0000 mon.c (mon.2) 27 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-08T23:03:37.840 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:37 vm06 bash[27746]: audit 2026-03-08T23:03:37.626341+0000 mon.c (mon.2) 28 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-08T23:03:37.840 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:37 vm06 bash[27746]: audit 2026-03-08T23:03:37.626341+0000 mon.c (mon.2) 28 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-08T23:03:37.840 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:37 vm06 bash[27746]: audit 2026-03-08T23:03:37.627150+0000 mon.c (mon.2) 29 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-08T23:03:37.840 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:37 vm06 bash[27746]: audit 2026-03-08T23:03:37.627150+0000 mon.c (mon.2) 29 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-08T23:03:37.840 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 
23:03:37 vm06 bash[27746]: audit 2026-03-08T23:03:37.627712+0000 mon.c (mon.2) 30 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-08T23:03:37.840 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:37 vm06 bash[27746]: audit 2026-03-08T23:03:37.627712+0000 mon.c (mon.2) 30 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-08T23:03:37.840 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:37 vm06 bash[27746]: audit 2026-03-08T23:03:37.628400+0000 mon.c (mon.2) 31 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-08T23:03:37.840 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:37 vm06 bash[27746]: audit 2026-03-08T23:03:37.628400+0000 mon.c (mon.2) 31 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-08T23:03:37.840 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:37 vm06 bash[27746]: cluster 2026-03-08T23:03:37.636752+0000 mon.a (mon.0) 725 : cluster [INF] Manager daemon y is now available 2026-03-08T23:03:37.840 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:37 vm06 bash[27746]: cluster 2026-03-08T23:03:37.636752+0000 mon.a (mon.0) 725 : cluster [INF] Manager daemon y is now available 2026-03-08T23:03:38.057 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:03:37 vm11 bash[24047]: debug 2026-03-08T23:03:37.754+0000 7feed03d8140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-08T23:03:38.057 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:03:37 vm11 bash[24047]: [08/Mar/2026:23:03:37] ENGINE Bus STARTING 2026-03-08T23:03:38.057 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:03:37 vm11 bash[24047]: CherryPy Checker: 2026-03-08T23:03:38.057 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:03:37 vm11 bash[24047]: The Application 
mounted at '' has an empty config. 2026-03-08T23:03:38.057 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:03:37 vm11 bash[24047]: [08/Mar/2026:23:03:37] ENGINE Serving on http://:::9283 2026-03-08T23:03:38.057 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:03:37 vm11 bash[24047]: [08/Mar/2026:23:03:37] ENGINE Bus STARTED 2026-03-08T23:03:38.279 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:03:37 vm06 bash[20883]: [08/Mar/2026:23:03:37] ENGINE Bus STARTED 2026-03-08T23:03:39.701 INFO:teuthology.orchestra.run.vm11.stdout:Scheduled alertmanager update... 2026-03-08T23:03:39.993 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:39 vm11 bash[23232]: audit 2026-03-08T23:03:37.660451+0000 mon.c (mon.2) 32 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:03:39.993 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:39 vm11 bash[23232]: audit 2026-03-08T23:03:37.660451+0000 mon.c (mon.2) 32 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:03:39.993 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:39 vm11 bash[23232]: audit 2026-03-08T23:03:37.670929+0000 mon.a (mon.0) 726 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:39.993 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:39 vm11 bash[23232]: audit 2026-03-08T23:03:37.670929+0000 mon.a (mon.0) 726 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:39.993 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:39 vm11 bash[23232]: audit 2026-03-08T23:03:37.691898+0000 mon.c (mon.2) 33 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-08T23:03:39.993 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:39 vm11 bash[23232]: audit 
2026-03-08T23:03:37.691898+0000 mon.c (mon.2) 33 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-08T23:03:39.993 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:39 vm11 bash[23232]: audit 2026-03-08T23:03:37.692198+0000 mon.a (mon.0) 727 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-08T23:03:39.993 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:39 vm11 bash[23232]: audit 2026-03-08T23:03:37.692198+0000 mon.a (mon.0) 727 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-08T23:03:39.993 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:39 vm11 bash[23232]: audit 2026-03-08T23:03:37.695872+0000 mon.c (mon.2) 34 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:03:39.993 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:39 vm11 bash[23232]: audit 2026-03-08T23:03:37.695872+0000 mon.c (mon.2) 34 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:03:39.993 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:39 vm11 bash[23232]: cluster 2026-03-08T23:03:37.762324+0000 mon.a (mon.0) 728 : cluster [DBG] Standby manager daemon x restarted 2026-03-08T23:03:39.993 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:39 vm11 bash[23232]: cluster 2026-03-08T23:03:37.762324+0000 mon.a (mon.0) 728 : cluster [DBG] Standby manager daemon x restarted 2026-03-08T23:03:39.993 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:39 vm11 bash[23232]: cluster 2026-03-08T23:03:37.762682+0000 mon.a (mon.0) 729 : cluster [DBG] 
Standby manager daemon x started 2026-03-08T23:03:39.993 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:39 vm11 bash[23232]: cluster 2026-03-08T23:03:37.762682+0000 mon.a (mon.0) 729 : cluster [DBG] Standby manager daemon x started 2026-03-08T23:03:39.993 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:39 vm11 bash[23232]: audit 2026-03-08T23:03:37.763037+0000 mon.c (mon.2) 35 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-08T23:03:39.994 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:39 vm11 bash[23232]: audit 2026-03-08T23:03:37.763037+0000 mon.c (mon.2) 35 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-08T23:03:39.994 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:39 vm11 bash[23232]: audit 2026-03-08T23:03:37.763309+0000 mon.a (mon.0) 730 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-08T23:03:39.994 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:39 vm11 bash[23232]: audit 2026-03-08T23:03:37.763309+0000 mon.a (mon.0) 730 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-08T23:03:39.994 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:39 vm11 bash[23232]: audit 2026-03-08T23:03:37.765408+0000 mon.b (mon.1) 30 : audit [DBG] from='mgr.? 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-08T23:03:39.994 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:39 vm11 bash[23232]: audit 2026-03-08T23:03:37.765408+0000 mon.b (mon.1) 30 : audit [DBG] from='mgr.? 
192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-08T23:03:39.994 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:39 vm11 bash[23232]: audit 2026-03-08T23:03:37.766483+0000 mon.b (mon.1) 31 : audit [DBG] from='mgr.? 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-08T23:03:39.994 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:39 vm11 bash[23232]: audit 2026-03-08T23:03:37.766483+0000 mon.b (mon.1) 31 : audit [DBG] from='mgr.? 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-08T23:03:39.994 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:39 vm11 bash[23232]: audit 2026-03-08T23:03:37.768235+0000 mon.b (mon.1) 32 : audit [DBG] from='mgr.? 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-08T23:03:39.994 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:39 vm11 bash[23232]: audit 2026-03-08T23:03:37.768235+0000 mon.b (mon.1) 32 : audit [DBG] from='mgr.? 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-08T23:03:39.994 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:39 vm11 bash[23232]: audit 2026-03-08T23:03:37.769769+0000 mon.b (mon.1) 33 : audit [DBG] from='mgr.? 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-08T23:03:39.994 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:39 vm11 bash[23232]: audit 2026-03-08T23:03:37.769769+0000 mon.b (mon.1) 33 : audit [DBG] from='mgr.? 
192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-08T23:03:40.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:39 vm06 bash[20625]: audit 2026-03-08T23:03:37.660451+0000 mon.c (mon.2) 32 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:03:40.054 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:39 vm06 bash[20625]: audit 2026-03-08T23:03:37.660451+0000 mon.c (mon.2) 32 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:03:40.054 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:39 vm06 bash[20625]: audit 2026-03-08T23:03:37.670929+0000 mon.a (mon.0) 726 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:40.054 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:39 vm06 bash[20625]: audit 2026-03-08T23:03:37.670929+0000 mon.a (mon.0) 726 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:40.054 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:39 vm06 bash[20625]: audit 2026-03-08T23:03:37.691898+0000 mon.c (mon.2) 33 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-08T23:03:40.054 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:39 vm06 bash[20625]: audit 2026-03-08T23:03:37.691898+0000 mon.c (mon.2) 33 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-08T23:03:40.054 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:39 vm06 bash[20625]: audit 2026-03-08T23:03:37.692198+0000 mon.a (mon.0) 727 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd=[{"prefix":"config 
rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-08T23:03:40.054 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:39 vm06 bash[20625]: audit 2026-03-08T23:03:37.692198+0000 mon.a (mon.0) 727 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-08T23:03:40.054 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:39 vm06 bash[20625]: audit 2026-03-08T23:03:37.695872+0000 mon.c (mon.2) 34 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:03:40.054 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:39 vm06 bash[20625]: audit 2026-03-08T23:03:37.695872+0000 mon.c (mon.2) 34 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:03:40.054 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:39 vm06 bash[20625]: cluster 2026-03-08T23:03:37.762324+0000 mon.a (mon.0) 728 : cluster [DBG] Standby manager daemon x restarted 2026-03-08T23:03:40.054 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:39 vm06 bash[20625]: cluster 2026-03-08T23:03:37.762324+0000 mon.a (mon.0) 728 : cluster [DBG] Standby manager daemon x restarted 2026-03-08T23:03:40.054 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:39 vm06 bash[20625]: cluster 2026-03-08T23:03:37.762682+0000 mon.a (mon.0) 729 : cluster [DBG] Standby manager daemon x started 2026-03-08T23:03:40.054 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:39 vm06 bash[20625]: cluster 2026-03-08T23:03:37.762682+0000 mon.a (mon.0) 729 : cluster [DBG] Standby manager daemon x started 2026-03-08T23:03:40.054 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:39 vm06 bash[20625]: audit 2026-03-08T23:03:37.763037+0000 mon.c (mon.2) 35 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' 
entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-08T23:03:40.054 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:39 vm06 bash[20625]: audit 2026-03-08T23:03:37.763037+0000 mon.c (mon.2) 35 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-08T23:03:40.054 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:39 vm06 bash[20625]: audit 2026-03-08T23:03:37.763309+0000 mon.a (mon.0) 730 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-08T23:03:40.054 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:39 vm06 bash[20625]: audit 2026-03-08T23:03:37.763309+0000 mon.a (mon.0) 730 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-08T23:03:40.055 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:39 vm06 bash[20625]: audit 2026-03-08T23:03:37.765408+0000 mon.b (mon.1) 30 : audit [DBG] from='mgr.? 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-08T23:03:40.055 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:39 vm06 bash[20625]: audit 2026-03-08T23:03:37.765408+0000 mon.b (mon.1) 30 : audit [DBG] from='mgr.? 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-08T23:03:40.055 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:39 vm06 bash[20625]: audit 2026-03-08T23:03:37.766483+0000 mon.b (mon.1) 31 : audit [DBG] from='mgr.? 
192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-08T23:03:40.055 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:39 vm06 bash[20625]: audit 2026-03-08T23:03:37.766483+0000 mon.b (mon.1) 31 : audit [DBG] from='mgr.? 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-08T23:03:40.055 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:39 vm06 bash[20625]: audit 2026-03-08T23:03:37.768235+0000 mon.b (mon.1) 32 : audit [DBG] from='mgr.? 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-08T23:03:40.055 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:39 vm06 bash[20625]: audit 2026-03-08T23:03:37.768235+0000 mon.b (mon.1) 32 : audit [DBG] from='mgr.? 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-08T23:03:40.055 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:39 vm06 bash[20625]: audit 2026-03-08T23:03:37.769769+0000 mon.b (mon.1) 33 : audit [DBG] from='mgr.? 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-08T23:03:40.055 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:39 vm06 bash[20625]: audit 2026-03-08T23:03:37.769769+0000 mon.b (mon.1) 33 : audit [DBG] from='mgr.? 
192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-08T23:03:40.055 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:39 vm06 bash[27746]: audit 2026-03-08T23:03:37.660451+0000 mon.c (mon.2) 32 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:03:40.055 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:39 vm06 bash[27746]: audit 2026-03-08T23:03:37.660451+0000 mon.c (mon.2) 32 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:03:40.055 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:39 vm06 bash[27746]: audit 2026-03-08T23:03:37.670929+0000 mon.a (mon.0) 726 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:40.055 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:39 vm06 bash[27746]: audit 2026-03-08T23:03:37.670929+0000 mon.a (mon.0) 726 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:40.055 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:39 vm06 bash[27746]: audit 2026-03-08T23:03:37.691898+0000 mon.c (mon.2) 33 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-08T23:03:40.055 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:39 vm06 bash[27746]: audit 2026-03-08T23:03:37.691898+0000 mon.c (mon.2) 33 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-08T23:03:40.055 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:39 vm06 bash[27746]: audit 2026-03-08T23:03:37.692198+0000 mon.a (mon.0) 727 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd=[{"prefix":"config 
rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-08T23:03:40.055 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:39 vm06 bash[27746]: audit 2026-03-08T23:03:37.692198+0000 mon.a (mon.0) 727 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-08T23:03:40.055 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:39 vm06 bash[27746]: audit 2026-03-08T23:03:37.695872+0000 mon.c (mon.2) 34 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:03:40.055 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:39 vm06 bash[27746]: audit 2026-03-08T23:03:37.695872+0000 mon.c (mon.2) 34 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:03:40.055 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:39 vm06 bash[27746]: cluster 2026-03-08T23:03:37.762324+0000 mon.a (mon.0) 728 : cluster [DBG] Standby manager daemon x restarted 2026-03-08T23:03:40.055 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:39 vm06 bash[27746]: cluster 2026-03-08T23:03:37.762324+0000 mon.a (mon.0) 728 : cluster [DBG] Standby manager daemon x restarted 2026-03-08T23:03:40.055 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:39 vm06 bash[27746]: cluster 2026-03-08T23:03:37.762682+0000 mon.a (mon.0) 729 : cluster [DBG] Standby manager daemon x started 2026-03-08T23:03:40.055 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:39 vm06 bash[27746]: cluster 2026-03-08T23:03:37.762682+0000 mon.a (mon.0) 729 : cluster [DBG] Standby manager daemon x started 2026-03-08T23:03:40.055 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:39 vm06 bash[27746]: audit 2026-03-08T23:03:37.763037+0000 mon.c (mon.2) 35 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' 
entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-08T23:03:40.055 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:39 vm06 bash[27746]: audit 2026-03-08T23:03:37.763037+0000 mon.c (mon.2) 35 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-08T23:03:40.055 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:39 vm06 bash[27746]: audit 2026-03-08T23:03:37.763309+0000 mon.a (mon.0) 730 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-08T23:03:40.055 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:39 vm06 bash[27746]: audit 2026-03-08T23:03:37.763309+0000 mon.a (mon.0) 730 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-08T23:03:40.055 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:39 vm06 bash[27746]: audit 2026-03-08T23:03:37.765408+0000 mon.b (mon.1) 30 : audit [DBG] from='mgr.? 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-08T23:03:40.055 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:39 vm06 bash[27746]: audit 2026-03-08T23:03:37.765408+0000 mon.b (mon.1) 30 : audit [DBG] from='mgr.? 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-08T23:03:40.055 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:39 vm06 bash[27746]: audit 2026-03-08T23:03:37.766483+0000 mon.b (mon.1) 31 : audit [DBG] from='mgr.? 
192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-08T23:03:40.055 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:39 vm06 bash[27746]: audit 2026-03-08T23:03:37.766483+0000 mon.b (mon.1) 31 : audit [DBG] from='mgr.? 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-08T23:03:40.055 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:39 vm06 bash[27746]: audit 2026-03-08T23:03:37.768235+0000 mon.b (mon.1) 32 : audit [DBG] from='mgr.? 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-08T23:03:40.055 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:39 vm06 bash[27746]: audit 2026-03-08T23:03:37.768235+0000 mon.b (mon.1) 32 : audit [DBG] from='mgr.? 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-08T23:03:40.055 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:39 vm06 bash[27746]: audit 2026-03-08T23:03:37.769769+0000 mon.b (mon.1) 33 : audit [DBG] from='mgr.? 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-08T23:03:40.055 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:39 vm06 bash[27746]: audit 2026-03-08T23:03:37.769769+0000 mon.b (mon.1) 33 : audit [DBG] from='mgr.? 
192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-08T23:03:40.174 DEBUG:teuthology.orchestra.run.vm06:alertmanager.a> sudo journalctl -f -n 0 -u ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@alertmanager.a.service 2026-03-08T23:03:40.175 INFO:tasks.cephadm:Adding grafana.a on vm11 2026-03-08T23:03:40.175 DEBUG:teuthology.orchestra.run.vm11:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e2eb96e6-1b41-11f1-83e5-75f1b5373d30 -- ceph orch apply grafana '1;vm11=a' 2026-03-08T23:03:41.029 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:03:40 vm06 bash[20883]: ::ffff:192.168.123.111 - - [08/Mar/2026:23:03:40] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-08T23:03:41.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:41 vm06 bash[20625]: cephadm 2026-03-08T23:03:38.635750+0000 mgr.y (mgr.24419) 1 : cephadm [INF] [08/Mar/2026:23:03:38] ENGINE Bus STARTING 2026-03-08T23:03:41.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:41 vm06 bash[20625]: cephadm 2026-03-08T23:03:38.635750+0000 mgr.y (mgr.24419) 1 : cephadm [INF] [08/Mar/2026:23:03:38] ENGINE Bus STARTING 2026-03-08T23:03:41.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:41 vm06 bash[20625]: cephadm 2026-03-08T23:03:38.744848+0000 mgr.y (mgr.24419) 2 : cephadm [INF] [08/Mar/2026:23:03:38] ENGINE Serving on https://192.168.123.106:7150 2026-03-08T23:03:41.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:41 vm06 bash[20625]: cephadm 2026-03-08T23:03:38.744848+0000 mgr.y (mgr.24419) 2 : cephadm [INF] [08/Mar/2026:23:03:38] ENGINE Serving on https://192.168.123.106:7150 2026-03-08T23:03:41.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:41 vm06 bash[20625]: cephadm 2026-03-08T23:03:38.745304+0000 mgr.y (mgr.24419) 3 : cephadm [INF] [08/Mar/2026:23:03:38] ENGINE 
Client ('192.168.123.106', 52526) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-08T23:03:41.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:41 vm06 bash[20625]: cephadm 2026-03-08T23:03:38.745304+0000 mgr.y (mgr.24419) 3 : cephadm [INF] [08/Mar/2026:23:03:38] ENGINE Client ('192.168.123.106', 52526) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-08T23:03:41.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:41 vm06 bash[20625]: cephadm 2026-03-08T23:03:38.846340+0000 mgr.y (mgr.24419) 4 : cephadm [INF] [08/Mar/2026:23:03:38] ENGINE Serving on http://192.168.123.106:8765 2026-03-08T23:03:41.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:41 vm06 bash[20625]: cephadm 2026-03-08T23:03:38.846340+0000 mgr.y (mgr.24419) 4 : cephadm [INF] [08/Mar/2026:23:03:38] ENGINE Serving on http://192.168.123.106:8765 2026-03-08T23:03:41.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:41 vm06 bash[20625]: cephadm 2026-03-08T23:03:38.846834+0000 mgr.y (mgr.24419) 5 : cephadm [INF] [08/Mar/2026:23:03:38] ENGINE Bus STARTED 2026-03-08T23:03:41.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:41 vm06 bash[20625]: cephadm 2026-03-08T23:03:38.846834+0000 mgr.y (mgr.24419) 5 : cephadm [INF] [08/Mar/2026:23:03:38] ENGINE Bus STARTED 2026-03-08T23:03:41.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:41 vm06 bash[20625]: cluster 2026-03-08T23:03:39.659504+0000 mon.a (mon.0) 731 : cluster [DBG] mgrmap e19: y(active, since 2s), standbys: x 2026-03-08T23:03:41.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:41 vm06 bash[20625]: cluster 2026-03-08T23:03:39.659504+0000 mon.a (mon.0) 731 : cluster [DBG] mgrmap e19: y(active, since 2s), standbys: x 2026-03-08T23:03:41.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:41 vm06 bash[20625]: audit 
2026-03-08T23:03:39.665484+0000 mgr.y (mgr.24419) 6 : audit [DBG] from='client.24454 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "alertmanager", "placement": "1;vm06=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:03:41.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:41 vm06 bash[20625]: audit 2026-03-08T23:03:39.665484+0000 mgr.y (mgr.24419) 6 : audit [DBG] from='client.24454 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "alertmanager", "placement": "1;vm06=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:03:41.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:41 vm06 bash[20625]: cephadm 2026-03-08T23:03:39.668020+0000 mgr.y (mgr.24419) 7 : cephadm [INF] Saving service alertmanager spec with placement vm06=a;count:1 2026-03-08T23:03:41.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:41 vm06 bash[20625]: cephadm 2026-03-08T23:03:39.668020+0000 mgr.y (mgr.24419) 7 : cephadm [INF] Saving service alertmanager spec with placement vm06=a;count:1 2026-03-08T23:03:41.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:41 vm06 bash[20625]: cluster 2026-03-08T23:03:39.689033+0000 mgr.y (mgr.24419) 8 : cluster [DBG] pgmap v3: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-08T23:03:41.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:41 vm06 bash[20625]: cluster 2026-03-08T23:03:39.689033+0000 mgr.y (mgr.24419) 8 : cluster [DBG] pgmap v3: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-08T23:03:41.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:41 vm06 bash[20625]: audit 2026-03-08T23:03:39.698178+0000 mon.a (mon.0) 732 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:41.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:41 vm06 bash[20625]: audit 2026-03-08T23:03:39.698178+0000 mon.a (mon.0) 732 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:41.780 
INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:41 vm06 bash[27746]: cephadm 2026-03-08T23:03:38.635750+0000 mgr.y (mgr.24419) 1 : cephadm [INF] [08/Mar/2026:23:03:38] ENGINE Bus STARTING 2026-03-08T23:03:41.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:41 vm06 bash[27746]: cephadm 2026-03-08T23:03:38.635750+0000 mgr.y (mgr.24419) 1 : cephadm [INF] [08/Mar/2026:23:03:38] ENGINE Bus STARTING 2026-03-08T23:03:41.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:41 vm06 bash[27746]: cephadm 2026-03-08T23:03:38.744848+0000 mgr.y (mgr.24419) 2 : cephadm [INF] [08/Mar/2026:23:03:38] ENGINE Serving on https://192.168.123.106:7150 2026-03-08T23:03:41.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:41 vm06 bash[27746]: cephadm 2026-03-08T23:03:38.744848+0000 mgr.y (mgr.24419) 2 : cephadm [INF] [08/Mar/2026:23:03:38] ENGINE Serving on https://192.168.123.106:7150 2026-03-08T23:03:41.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:41 vm06 bash[27746]: cephadm 2026-03-08T23:03:38.745304+0000 mgr.y (mgr.24419) 3 : cephadm [INF] [08/Mar/2026:23:03:38] ENGINE Client ('192.168.123.106', 52526) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-08T23:03:41.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:41 vm06 bash[27746]: cephadm 2026-03-08T23:03:38.745304+0000 mgr.y (mgr.24419) 3 : cephadm [INF] [08/Mar/2026:23:03:38] ENGINE Client ('192.168.123.106', 52526) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-08T23:03:41.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:41 vm06 bash[27746]: cephadm 2026-03-08T23:03:38.846340+0000 mgr.y (mgr.24419) 4 : cephadm [INF] [08/Mar/2026:23:03:38] ENGINE Serving on http://192.168.123.106:8765 2026-03-08T23:03:41.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:41 vm06 bash[27746]: cephadm 
2026-03-08T23:03:38.846340+0000 mgr.y (mgr.24419) 4 : cephadm [INF] [08/Mar/2026:23:03:38] ENGINE Serving on http://192.168.123.106:8765 2026-03-08T23:03:41.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:41 vm06 bash[27746]: cephadm 2026-03-08T23:03:38.846834+0000 mgr.y (mgr.24419) 5 : cephadm [INF] [08/Mar/2026:23:03:38] ENGINE Bus STARTED 2026-03-08T23:03:41.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:41 vm06 bash[27746]: cephadm 2026-03-08T23:03:38.846834+0000 mgr.y (mgr.24419) 5 : cephadm [INF] [08/Mar/2026:23:03:38] ENGINE Bus STARTED 2026-03-08T23:03:41.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:41 vm06 bash[27746]: cluster 2026-03-08T23:03:39.659504+0000 mon.a (mon.0) 731 : cluster [DBG] mgrmap e19: y(active, since 2s), standbys: x 2026-03-08T23:03:41.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:41 vm06 bash[27746]: cluster 2026-03-08T23:03:39.659504+0000 mon.a (mon.0) 731 : cluster [DBG] mgrmap e19: y(active, since 2s), standbys: x 2026-03-08T23:03:41.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:41 vm06 bash[27746]: audit 2026-03-08T23:03:39.665484+0000 mgr.y (mgr.24419) 6 : audit [DBG] from='client.24454 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "alertmanager", "placement": "1;vm06=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:03:41.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:41 vm06 bash[27746]: audit 2026-03-08T23:03:39.665484+0000 mgr.y (mgr.24419) 6 : audit [DBG] from='client.24454 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "alertmanager", "placement": "1;vm06=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:03:41.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:41 vm06 bash[27746]: cephadm 2026-03-08T23:03:39.668020+0000 mgr.y (mgr.24419) 7 : cephadm [INF] Saving service alertmanager spec with placement vm06=a;count:1 2026-03-08T23:03:41.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:41 vm06 
bash[27746]: cephadm 2026-03-08T23:03:39.668020+0000 mgr.y (mgr.24419) 7 : cephadm [INF] Saving service alertmanager spec with placement vm06=a;count:1 2026-03-08T23:03:41.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:41 vm06 bash[27746]: cluster 2026-03-08T23:03:39.689033+0000 mgr.y (mgr.24419) 8 : cluster [DBG] pgmap v3: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-08T23:03:41.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:41 vm06 bash[27746]: cluster 2026-03-08T23:03:39.689033+0000 mgr.y (mgr.24419) 8 : cluster [DBG] pgmap v3: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-08T23:03:41.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:41 vm06 bash[27746]: audit 2026-03-08T23:03:39.698178+0000 mon.a (mon.0) 732 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:41.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:41 vm06 bash[27746]: audit 2026-03-08T23:03:39.698178+0000 mon.a (mon.0) 732 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:41.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:41 vm11 bash[23232]: cephadm 2026-03-08T23:03:38.635750+0000 mgr.y (mgr.24419) 1 : cephadm [INF] [08/Mar/2026:23:03:38] ENGINE Bus STARTING 2026-03-08T23:03:41.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:41 vm11 bash[23232]: cephadm 2026-03-08T23:03:38.635750+0000 mgr.y (mgr.24419) 1 : cephadm [INF] [08/Mar/2026:23:03:38] ENGINE Bus STARTING 2026-03-08T23:03:41.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:41 vm11 bash[23232]: cephadm 2026-03-08T23:03:38.744848+0000 mgr.y (mgr.24419) 2 : cephadm [INF] [08/Mar/2026:23:03:38] ENGINE Serving on https://192.168.123.106:7150 2026-03-08T23:03:41.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:41 vm11 bash[23232]: cephadm 2026-03-08T23:03:38.744848+0000 mgr.y (mgr.24419) 2 : cephadm [INF] [08/Mar/2026:23:03:38] ENGINE Serving on https://192.168.123.106:7150 
2026-03-08T23:03:41.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:41 vm11 bash[23232]: cephadm 2026-03-08T23:03:38.745304+0000 mgr.y (mgr.24419) 3 : cephadm [INF] [08/Mar/2026:23:03:38] ENGINE Client ('192.168.123.106', 52526) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-08T23:03:41.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:41 vm11 bash[23232]: cephadm 2026-03-08T23:03:38.745304+0000 mgr.y (mgr.24419) 3 : cephadm [INF] [08/Mar/2026:23:03:38] ENGINE Client ('192.168.123.106', 52526) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-08T23:03:41.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:41 vm11 bash[23232]: cephadm 2026-03-08T23:03:38.846340+0000 mgr.y (mgr.24419) 4 : cephadm [INF] [08/Mar/2026:23:03:38] ENGINE Serving on http://192.168.123.106:8765 2026-03-08T23:03:41.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:41 vm11 bash[23232]: cephadm 2026-03-08T23:03:38.846340+0000 mgr.y (mgr.24419) 4 : cephadm [INF] [08/Mar/2026:23:03:38] ENGINE Serving on http://192.168.123.106:8765 2026-03-08T23:03:41.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:41 vm11 bash[23232]: cephadm 2026-03-08T23:03:38.846834+0000 mgr.y (mgr.24419) 5 : cephadm [INF] [08/Mar/2026:23:03:38] ENGINE Bus STARTED 2026-03-08T23:03:41.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:41 vm11 bash[23232]: cephadm 2026-03-08T23:03:38.846834+0000 mgr.y (mgr.24419) 5 : cephadm [INF] [08/Mar/2026:23:03:38] ENGINE Bus STARTED 2026-03-08T23:03:41.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:41 vm11 bash[23232]: cluster 2026-03-08T23:03:39.659504+0000 mon.a (mon.0) 731 : cluster [DBG] mgrmap e19: y(active, since 2s), standbys: x 2026-03-08T23:03:41.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:41 vm11 bash[23232]: cluster 2026-03-08T23:03:39.659504+0000 
mon.a (mon.0) 731 : cluster [DBG] mgrmap e19: y(active, since 2s), standbys: x 2026-03-08T23:03:41.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:41 vm11 bash[23232]: audit 2026-03-08T23:03:39.665484+0000 mgr.y (mgr.24419) 6 : audit [DBG] from='client.24454 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "alertmanager", "placement": "1;vm06=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:03:41.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:41 vm11 bash[23232]: audit 2026-03-08T23:03:39.665484+0000 mgr.y (mgr.24419) 6 : audit [DBG] from='client.24454 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "alertmanager", "placement": "1;vm06=a", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:03:41.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:41 vm11 bash[23232]: cephadm 2026-03-08T23:03:39.668020+0000 mgr.y (mgr.24419) 7 : cephadm [INF] Saving service alertmanager spec with placement vm06=a;count:1 2026-03-08T23:03:41.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:41 vm11 bash[23232]: cephadm 2026-03-08T23:03:39.668020+0000 mgr.y (mgr.24419) 7 : cephadm [INF] Saving service alertmanager spec with placement vm06=a;count:1 2026-03-08T23:03:41.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:41 vm11 bash[23232]: cluster 2026-03-08T23:03:39.689033+0000 mgr.y (mgr.24419) 8 : cluster [DBG] pgmap v3: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-08T23:03:41.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:41 vm11 bash[23232]: cluster 2026-03-08T23:03:39.689033+0000 mgr.y (mgr.24419) 8 : cluster [DBG] pgmap v3: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-08T23:03:41.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:41 vm11 bash[23232]: audit 2026-03-08T23:03:39.698178+0000 mon.a (mon.0) 732 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:41.807 
INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:41 vm11 bash[23232]: audit 2026-03-08T23:03:39.698178+0000 mon.a (mon.0) 732 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:42.307 INFO:journalctl@ceph.iscsi.iscsi.a.vm11.stdout:Mar 08 23:03:42 vm11 bash[48986]: debug there is no tcmu-runner data available 2026-03-08T23:03:43.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:42 vm06 bash[20625]: cluster 2026-03-08T23:03:40.999475+0000 mon.a (mon.0) 733 : cluster [DBG] mgrmap e20: y(active, since 3s), standbys: x 2026-03-08T23:03:43.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:42 vm06 bash[20625]: cluster 2026-03-08T23:03:40.999475+0000 mon.a (mon.0) 733 : cluster [DBG] mgrmap e20: y(active, since 3s), standbys: x 2026-03-08T23:03:43.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:42 vm06 bash[20625]: cluster 2026-03-08T23:03:41.624479+0000 mgr.y (mgr.24419) 9 : cluster [DBG] pgmap v4: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-08T23:03:43.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:42 vm06 bash[20625]: cluster 2026-03-08T23:03:41.624479+0000 mgr.y (mgr.24419) 9 : cluster [DBG] pgmap v4: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-08T23:03:43.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:42 vm06 bash[20625]: audit 2026-03-08T23:03:42.034519+0000 mgr.y (mgr.24419) 10 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:03:43.030 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:42 vm06 bash[20625]: audit 2026-03-08T23:03:42.034519+0000 mgr.y (mgr.24419) 10 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:03:43.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:42 vm06 bash[27746]: cluster 2026-03-08T23:03:40.999475+0000 mon.a 
(mon.0) 733 : cluster [DBG] mgrmap e20: y(active, since 3s), standbys: x 2026-03-08T23:03:43.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:42 vm06 bash[27746]: cluster 2026-03-08T23:03:40.999475+0000 mon.a (mon.0) 733 : cluster [DBG] mgrmap e20: y(active, since 3s), standbys: x 2026-03-08T23:03:43.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:42 vm06 bash[27746]: cluster 2026-03-08T23:03:41.624479+0000 mgr.y (mgr.24419) 9 : cluster [DBG] pgmap v4: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-08T23:03:43.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:42 vm06 bash[27746]: cluster 2026-03-08T23:03:41.624479+0000 mgr.y (mgr.24419) 9 : cluster [DBG] pgmap v4: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-08T23:03:43.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:42 vm06 bash[27746]: audit 2026-03-08T23:03:42.034519+0000 mgr.y (mgr.24419) 10 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:03:43.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:42 vm06 bash[27746]: audit 2026-03-08T23:03:42.034519+0000 mgr.y (mgr.24419) 10 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:03:43.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:42 vm11 bash[23232]: cluster 2026-03-08T23:03:40.999475+0000 mon.a (mon.0) 733 : cluster [DBG] mgrmap e20: y(active, since 3s), standbys: x 2026-03-08T23:03:43.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:42 vm11 bash[23232]: cluster 2026-03-08T23:03:40.999475+0000 mon.a (mon.0) 733 : cluster [DBG] mgrmap e20: y(active, since 3s), standbys: x 2026-03-08T23:03:43.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:42 vm11 bash[23232]: cluster 2026-03-08T23:03:41.624479+0000 mgr.y (mgr.24419) 9 : cluster [DBG] pgmap v4: 
132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-08T23:03:43.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:42 vm11 bash[23232]: cluster 2026-03-08T23:03:41.624479+0000 mgr.y (mgr.24419) 9 : cluster [DBG] pgmap v4: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-08T23:03:43.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:42 vm11 bash[23232]: audit 2026-03-08T23:03:42.034519+0000 mgr.y (mgr.24419) 10 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:03:43.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:42 vm11 bash[23232]: audit 2026-03-08T23:03:42.034519+0000 mgr.y (mgr.24419) 10 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:03:43.905 INFO:teuthology.orchestra.run.vm11.stderr:Inferring config /var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/mon.b/config 2026-03-08T23:03:43.930 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:43 vm11 bash[23232]: cluster 2026-03-08T23:03:42.588835+0000 mon.a (mon.0) 734 : cluster [DBG] mgrmap e21: y(active, since 5s), standbys: x 2026-03-08T23:03:43.931 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:43 vm11 bash[23232]: cluster 2026-03-08T23:03:42.588835+0000 mon.a (mon.0) 734 : cluster [DBG] mgrmap e21: y(active, since 5s), standbys: x 2026-03-08T23:03:43.931 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:43 vm11 bash[23232]: audit 2026-03-08T23:03:43.222449+0000 mon.a (mon.0) 735 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:43.931 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:43 vm11 bash[23232]: audit 2026-03-08T23:03:43.222449+0000 mon.a (mon.0) 735 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:43.931 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:43 vm11 bash[23232]: audit 
2026-03-08T23:03:43.244518+0000 mon.a (mon.0) 736 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:43.931 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:43 vm11 bash[23232]: audit 2026-03-08T23:03:43.244518+0000 mon.a (mon.0) 736 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:43.931 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:43 vm11 bash[23232]: audit 2026-03-08T23:03:43.327345+0000 mon.a (mon.0) 737 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:43.931 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:43 vm11 bash[23232]: audit 2026-03-08T23:03:43.327345+0000 mon.a (mon.0) 737 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:43.931 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:43 vm11 bash[23232]: audit 2026-03-08T23:03:43.346742+0000 mon.a (mon.0) 738 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:43.931 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:43 vm11 bash[23232]: audit 2026-03-08T23:03:43.346742+0000 mon.a (mon.0) 738 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:44.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:43 vm06 bash[20625]: cluster 2026-03-08T23:03:42.588835+0000 mon.a (mon.0) 734 : cluster [DBG] mgrmap e21: y(active, since 5s), standbys: x 2026-03-08T23:03:44.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:43 vm06 bash[20625]: cluster 2026-03-08T23:03:42.588835+0000 mon.a (mon.0) 734 : cluster [DBG] mgrmap e21: y(active, since 5s), standbys: x 2026-03-08T23:03:44.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:43 vm06 bash[20625]: audit 2026-03-08T23:03:43.222449+0000 mon.a (mon.0) 735 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:44.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:43 vm06 bash[20625]: audit 2026-03-08T23:03:43.222449+0000 mon.a (mon.0) 735 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:44.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 
23:03:43 vm06 bash[20625]: audit 2026-03-08T23:03:43.244518+0000 mon.a (mon.0) 736 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:44.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:43 vm06 bash[20625]: audit 2026-03-08T23:03:43.244518+0000 mon.a (mon.0) 736 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:44.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:43 vm06 bash[20625]: audit 2026-03-08T23:03:43.327345+0000 mon.a (mon.0) 737 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:44.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:43 vm06 bash[20625]: audit 2026-03-08T23:03:43.327345+0000 mon.a (mon.0) 737 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:44.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:43 vm06 bash[20625]: audit 2026-03-08T23:03:43.346742+0000 mon.a (mon.0) 738 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:44.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:43 vm06 bash[20625]: audit 2026-03-08T23:03:43.346742+0000 mon.a (mon.0) 738 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:44.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:43 vm06 bash[27746]: cluster 2026-03-08T23:03:42.588835+0000 mon.a (mon.0) 734 : cluster [DBG] mgrmap e21: y(active, since 5s), standbys: x 2026-03-08T23:03:44.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:43 vm06 bash[27746]: cluster 2026-03-08T23:03:42.588835+0000 mon.a (mon.0) 734 : cluster [DBG] mgrmap e21: y(active, since 5s), standbys: x 2026-03-08T23:03:44.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:43 vm06 bash[27746]: audit 2026-03-08T23:03:43.222449+0000 mon.a (mon.0) 735 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:44.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:43 vm06 bash[27746]: audit 2026-03-08T23:03:43.222449+0000 mon.a (mon.0) 735 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:44.030 
INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:43 vm06 bash[27746]: audit 2026-03-08T23:03:43.244518+0000 mon.a (mon.0) 736 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:44.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:43 vm06 bash[27746]: audit 2026-03-08T23:03:43.244518+0000 mon.a (mon.0) 736 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:44.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:43 vm06 bash[27746]: audit 2026-03-08T23:03:43.327345+0000 mon.a (mon.0) 737 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:44.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:43 vm06 bash[27746]: audit 2026-03-08T23:03:43.327345+0000 mon.a (mon.0) 737 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:44.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:43 vm06 bash[27746]: audit 2026-03-08T23:03:43.346742+0000 mon.a (mon.0) 738 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:44.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:43 vm06 bash[27746]: audit 2026-03-08T23:03:43.346742+0000 mon.a (mon.0) 738 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:44.264 INFO:teuthology.orchestra.run.vm11.stdout:Scheduled grafana update... 2026-03-08T23:03:44.337 DEBUG:teuthology.orchestra.run.vm11:grafana.a> sudo journalctl -f -n 0 -u ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@grafana.a.service 2026-03-08T23:03:44.338 INFO:tasks.cephadm:Setting up client nodes... 
2026-03-08T23:03:44.338 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e2eb96e6-1b41-11f1-83e5-75f1b5373d30 -- ceph auth get-or-create client.0 mon 'allow *' osd 'allow *' mds 'allow *' mgr 'allow *' 2026-03-08T23:03:44.825 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:44 vm06 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:03:44.825 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:03:44 vm06 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:03:44.826 INFO:journalctl@ceph.osd.1.vm06.stdout:Mar 08 23:03:44 vm06 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:03:44.826 INFO:journalctl@ceph.osd.2.vm06.stdout:Mar 08 23:03:44 vm06 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. 
This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:03:44.826 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:44 vm06 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:03:44.826 INFO:journalctl@ceph.rgw.foo.a.vm06.stdout:Mar 08 23:03:44 vm06 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:03:44.826 INFO:journalctl@ceph.osd.0.vm06.stdout:Mar 08 23:03:44 vm06 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:03:44.826 INFO:journalctl@ceph.osd.3.vm06.stdout:Mar 08 23:03:44 vm06 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:03:45.091 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:44 vm06 bash[20625]: cluster 2026-03-08T23:03:43.624740+0000 mgr.y (mgr.24419) 11 : cluster [DBG] pgmap v5: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-08T23:03:45.091 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:44 vm06 bash[20625]: cluster 2026-03-08T23:03:43.624740+0000 mgr.y (mgr.24419) 11 : cluster [DBG] pgmap v5: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-08T23:03:45.091 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:44 vm06 bash[20625]: audit 2026-03-08T23:03:43.919159+0000 mon.a (mon.0) 739 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:45.091 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:44 vm06 bash[20625]: audit 2026-03-08T23:03:43.919159+0000 mon.a (mon.0) 739 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:45.091 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:44 vm06 bash[20625]: audit 2026-03-08T23:03:43.939314+0000 mon.a (mon.0) 740 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:45.091 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:44 vm06 bash[20625]: audit 2026-03-08T23:03:43.939314+0000 mon.a (mon.0) 740 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:45.091 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:44 vm06 bash[20625]: audit 2026-03-08T23:03:43.946345+0000 mon.c (mon.2) 36 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm06", "name": "osd_memory_target"}]: dispatch 2026-03-08T23:03:45.091 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:44 vm06 bash[20625]: audit 2026-03-08T23:03:43.946345+0000 mon.c (mon.2) 36 : audit [INF] from='mgr.24419 
192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm06", "name": "osd_memory_target"}]: dispatch 2026-03-08T23:03:45.091 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:44 vm06 bash[20625]: audit 2026-03-08T23:03:43.946816+0000 mon.a (mon.0) 741 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm06", "name": "osd_memory_target"}]: dispatch 2026-03-08T23:03:45.091 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:44 vm06 bash[20625]: audit 2026-03-08T23:03:43.946816+0000 mon.a (mon.0) 741 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm06", "name": "osd_memory_target"}]: dispatch 2026-03-08T23:03:45.091 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:44 vm06 bash[20625]: audit 2026-03-08T23:03:44.039858+0000 mon.a (mon.0) 742 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:45.091 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:44 vm06 bash[20625]: audit 2026-03-08T23:03:44.039858+0000 mon.a (mon.0) 742 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:45.091 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:44 vm06 bash[20625]: audit 2026-03-08T23:03:44.048010+0000 mon.a (mon.0) 743 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:45.091 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:44 vm06 bash[20625]: audit 2026-03-08T23:03:44.048010+0000 mon.a (mon.0) 743 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:45.091 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:44 vm06 bash[20625]: audit 2026-03-08T23:03:44.052421+0000 mon.c (mon.2) 37 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm11", "name": "osd_memory_target"}]: dispatch 2026-03-08T23:03:45.091 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:44 vm06 bash[20625]: audit 2026-03-08T23:03:44.052421+0000 mon.c (mon.2) 37 : 
audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm11", "name": "osd_memory_target"}]: dispatch 2026-03-08T23:03:45.091 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:44 vm06 bash[20625]: audit 2026-03-08T23:03:44.052705+0000 mon.a (mon.0) 744 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm11", "name": "osd_memory_target"}]: dispatch 2026-03-08T23:03:45.091 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:44 vm06 bash[20625]: audit 2026-03-08T23:03:44.052705+0000 mon.a (mon.0) 744 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm11", "name": "osd_memory_target"}]: dispatch 2026-03-08T23:03:45.091 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:44 vm06 bash[20625]: audit 2026-03-08T23:03:44.053914+0000 mon.c (mon.2) 38 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:03:45.091 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:44 vm06 bash[20625]: audit 2026-03-08T23:03:44.053914+0000 mon.c (mon.2) 38 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:03:45.091 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:44 vm06 bash[20625]: audit 2026-03-08T23:03:44.054734+0000 mon.c (mon.2) 39 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:03:45.091 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:44 vm06 bash[20625]: audit 2026-03-08T23:03:44.054734+0000 mon.c (mon.2) 39 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:03:45.091 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:44 vm06 bash[20625]: 
cephadm 2026-03-08T23:03:44.055675+0000 mgr.y (mgr.24419) 12 : cephadm [INF] Updating vm06:/etc/ceph/ceph.conf 2026-03-08T23:03:45.091 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:44 vm06 bash[20625]: cephadm 2026-03-08T23:03:44.055675+0000 mgr.y (mgr.24419) 12 : cephadm [INF] Updating vm06:/etc/ceph/ceph.conf 2026-03-08T23:03:45.091 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:44 vm06 bash[20625]: cephadm 2026-03-08T23:03:44.055940+0000 mgr.y (mgr.24419) 13 : cephadm [INF] Updating vm11:/etc/ceph/ceph.conf 2026-03-08T23:03:45.091 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:44 vm06 bash[20625]: cephadm 2026-03-08T23:03:44.055940+0000 mgr.y (mgr.24419) 13 : cephadm [INF] Updating vm11:/etc/ceph/ceph.conf 2026-03-08T23:03:45.091 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:44 vm06 bash[20625]: cephadm 2026-03-08T23:03:44.094806+0000 mgr.y (mgr.24419) 14 : cephadm [INF] Updating vm06:/var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/config/ceph.conf 2026-03-08T23:03:45.091 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:44 vm06 bash[20625]: cephadm 2026-03-08T23:03:44.094806+0000 mgr.y (mgr.24419) 14 : cephadm [INF] Updating vm06:/var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/config/ceph.conf 2026-03-08T23:03:45.091 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:44 vm06 bash[20625]: cephadm 2026-03-08T23:03:44.096873+0000 mgr.y (mgr.24419) 15 : cephadm [INF] Updating vm11:/var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/config/ceph.conf 2026-03-08T23:03:45.092 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:44 vm06 bash[20625]: cephadm 2026-03-08T23:03:44.096873+0000 mgr.y (mgr.24419) 15 : cephadm [INF] Updating vm11:/var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/config/ceph.conf 2026-03-08T23:03:45.092 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:44 vm06 bash[20625]: cephadm 2026-03-08T23:03:44.133990+0000 mgr.y (mgr.24419) 16 : cephadm [INF] Updating vm06:/etc/ceph/ceph.client.admin.keyring 
2026-03-08T23:03:45.092 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:44 vm06 bash[20625]: cephadm 2026-03-08T23:03:44.133990+0000 mgr.y (mgr.24419) 16 : cephadm [INF] Updating vm06:/etc/ceph/ceph.client.admin.keyring 2026-03-08T23:03:45.092 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:44 vm06 bash[20625]: cephadm 2026-03-08T23:03:44.145044+0000 mgr.y (mgr.24419) 17 : cephadm [INF] Updating vm11:/etc/ceph/ceph.client.admin.keyring 2026-03-08T23:03:45.092 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:44 vm06 bash[20625]: cephadm 2026-03-08T23:03:44.145044+0000 mgr.y (mgr.24419) 17 : cephadm [INF] Updating vm11:/etc/ceph/ceph.client.admin.keyring 2026-03-08T23:03:45.092 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:44 vm06 bash[20625]: cephadm 2026-03-08T23:03:44.169026+0000 mgr.y (mgr.24419) 18 : cephadm [INF] Updating vm06:/var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/config/ceph.client.admin.keyring 2026-03-08T23:03:45.092 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:44 vm06 bash[20625]: cephadm 2026-03-08T23:03:44.169026+0000 mgr.y (mgr.24419) 18 : cephadm [INF] Updating vm06:/var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/config/ceph.client.admin.keyring 2026-03-08T23:03:45.092 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:44 vm06 bash[20625]: audit 2026-03-08T23:03:44.215962+0000 mon.a (mon.0) 745 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:45.092 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:44 vm06 bash[20625]: audit 2026-03-08T23:03:44.215962+0000 mon.a (mon.0) 745 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:45.092 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:44 vm06 bash[20625]: audit 2026-03-08T23:03:44.236116+0000 mon.a (mon.0) 746 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:45.092 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:44 vm06 bash[20625]: audit 2026-03-08T23:03:44.236116+0000 mon.a (mon.0) 746 : audit [INF] from='mgr.24419 
' entity='mgr.y' 2026-03-08T23:03:45.092 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:44 vm06 bash[20625]: audit 2026-03-08T23:03:44.248432+0000 mon.a (mon.0) 747 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:45.092 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:44 vm06 bash[20625]: audit 2026-03-08T23:03:44.248432+0000 mon.a (mon.0) 747 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:45.092 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:44 vm06 bash[20625]: audit 2026-03-08T23:03:44.256286+0000 mon.a (mon.0) 748 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:45.092 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:44 vm06 bash[20625]: audit 2026-03-08T23:03:44.256286+0000 mon.a (mon.0) 748 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:45.092 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:44 vm06 bash[20625]: audit 2026-03-08T23:03:44.272807+0000 mon.a (mon.0) 749 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:45.092 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:44 vm06 bash[20625]: audit 2026-03-08T23:03:44.272807+0000 mon.a (mon.0) 749 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:45.092 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:44 vm06 bash[20625]: audit 2026-03-08T23:03:44.295058+0000 mon.a (mon.0) 750 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:45.092 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:44 vm06 bash[20625]: audit 2026-03-08T23:03:44.295058+0000 mon.a (mon.0) 750 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:45.092 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:44 vm06 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:03:45.092 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:03:44 vm06 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:03:45.092 INFO:journalctl@ceph.osd.1.vm06.stdout:Mar 08 23:03:44 vm06 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:03:45.092 INFO:journalctl@ceph.osd.2.vm06.stdout:Mar 08 23:03:44 vm06 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-08T23:03:45.092 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:44 vm06 bash[27746]: cluster 2026-03-08T23:03:43.624740+0000 mgr.y (mgr.24419) 11 : cluster [DBG] pgmap v5: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-08T23:03:45.092 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:44 vm06 bash[27746]: cluster 2026-03-08T23:03:43.624740+0000 mgr.y (mgr.24419) 11 : cluster [DBG] pgmap v5: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-08T23:03:45.092 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:44 vm06 bash[27746]: audit 2026-03-08T23:03:43.919159+0000 mon.a (mon.0) 739 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:45.092 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:44 vm06 bash[27746]: audit 2026-03-08T23:03:43.919159+0000 mon.a (mon.0) 739 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:45.092 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:44 vm06 bash[27746]: audit 2026-03-08T23:03:43.939314+0000 mon.a (mon.0) 740 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:45.092 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:44 vm06 bash[27746]: audit 2026-03-08T23:03:43.939314+0000 mon.a (mon.0) 740 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:45.092 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:44 vm06 bash[27746]: audit 2026-03-08T23:03:43.946345+0000 mon.c (mon.2) 36 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm06", "name": "osd_memory_target"}]: dispatch 2026-03-08T23:03:45.092 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:44 vm06 bash[27746]: audit 2026-03-08T23:03:43.946345+0000 mon.c (mon.2) 36 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm06", "name": "osd_memory_target"}]: dispatch 2026-03-08T23:03:45.093 
INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:44 vm06 bash[27746]: audit 2026-03-08T23:03:43.946816+0000 mon.a (mon.0) 741 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm06", "name": "osd_memory_target"}]: dispatch 2026-03-08T23:03:45.093 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:44 vm06 bash[27746]: audit 2026-03-08T23:03:43.946816+0000 mon.a (mon.0) 741 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm06", "name": "osd_memory_target"}]: dispatch 2026-03-08T23:03:45.093 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:44 vm06 bash[27746]: audit 2026-03-08T23:03:44.039858+0000 mon.a (mon.0) 742 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:45.093 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:44 vm06 bash[27746]: audit 2026-03-08T23:03:44.039858+0000 mon.a (mon.0) 742 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:45.093 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:44 vm06 bash[27746]: audit 2026-03-08T23:03:44.048010+0000 mon.a (mon.0) 743 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:45.093 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:44 vm06 bash[27746]: audit 2026-03-08T23:03:44.048010+0000 mon.a (mon.0) 743 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:45.093 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:44 vm06 bash[27746]: audit 2026-03-08T23:03:44.052421+0000 mon.c (mon.2) 37 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm11", "name": "osd_memory_target"}]: dispatch 2026-03-08T23:03:45.093 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:44 vm06 bash[27746]: audit 2026-03-08T23:03:44.052421+0000 mon.c (mon.2) 37 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm11", "name": "osd_memory_target"}]: dispatch 
2026-03-08T23:03:45.093 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:44 vm06 bash[27746]: audit 2026-03-08T23:03:44.052705+0000 mon.a (mon.0) 744 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm11", "name": "osd_memory_target"}]: dispatch 2026-03-08T23:03:45.093 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:44 vm06 bash[27746]: audit 2026-03-08T23:03:44.052705+0000 mon.a (mon.0) 744 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm11", "name": "osd_memory_target"}]: dispatch 2026-03-08T23:03:45.093 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:44 vm06 bash[27746]: audit 2026-03-08T23:03:44.053914+0000 mon.c (mon.2) 38 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:03:45.093 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:44 vm06 bash[27746]: audit 2026-03-08T23:03:44.053914+0000 mon.c (mon.2) 38 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:03:45.093 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:44 vm06 bash[27746]: audit 2026-03-08T23:03:44.054734+0000 mon.c (mon.2) 39 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:03:45.093 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:44 vm06 bash[27746]: audit 2026-03-08T23:03:44.054734+0000 mon.c (mon.2) 39 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:03:45.093 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:44 vm06 bash[27746]: cephadm 2026-03-08T23:03:44.055675+0000 mgr.y (mgr.24419) 12 : cephadm [INF] Updating vm06:/etc/ceph/ceph.conf 2026-03-08T23:03:45.093 
INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:44 vm06 bash[27746]: cephadm 2026-03-08T23:03:44.055675+0000 mgr.y (mgr.24419) 12 : cephadm [INF] Updating vm06:/etc/ceph/ceph.conf 2026-03-08T23:03:45.093 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:44 vm06 bash[27746]: cephadm 2026-03-08T23:03:44.055940+0000 mgr.y (mgr.24419) 13 : cephadm [INF] Updating vm11:/etc/ceph/ceph.conf 2026-03-08T23:03:45.093 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:44 vm06 bash[27746]: cephadm 2026-03-08T23:03:44.055940+0000 mgr.y (mgr.24419) 13 : cephadm [INF] Updating vm11:/etc/ceph/ceph.conf 2026-03-08T23:03:45.093 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:44 vm06 bash[27746]: cephadm 2026-03-08T23:03:44.094806+0000 mgr.y (mgr.24419) 14 : cephadm [INF] Updating vm06:/var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/config/ceph.conf 2026-03-08T23:03:45.093 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:44 vm06 bash[27746]: cephadm 2026-03-08T23:03:44.094806+0000 mgr.y (mgr.24419) 14 : cephadm [INF] Updating vm06:/var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/config/ceph.conf 2026-03-08T23:03:45.093 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:44 vm06 bash[27746]: cephadm 2026-03-08T23:03:44.096873+0000 mgr.y (mgr.24419) 15 : cephadm [INF] Updating vm11:/var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/config/ceph.conf 2026-03-08T23:03:45.093 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:44 vm06 bash[27746]: cephadm 2026-03-08T23:03:44.096873+0000 mgr.y (mgr.24419) 15 : cephadm [INF] Updating vm11:/var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/config/ceph.conf 2026-03-08T23:03:45.093 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:44 vm06 bash[27746]: cephadm 2026-03-08T23:03:44.133990+0000 mgr.y (mgr.24419) 16 : cephadm [INF] Updating vm06:/etc/ceph/ceph.client.admin.keyring 2026-03-08T23:03:45.093 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:44 vm06 bash[27746]: cephadm 2026-03-08T23:03:44.133990+0000 mgr.y 
(mgr.24419) 16 : cephadm [INF] Updating vm06:/etc/ceph/ceph.client.admin.keyring 2026-03-08T23:03:45.093 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:44 vm06 bash[27746]: cephadm 2026-03-08T23:03:44.145044+0000 mgr.y (mgr.24419) 17 : cephadm [INF] Updating vm11:/etc/ceph/ceph.client.admin.keyring 2026-03-08T23:03:45.093 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:44 vm06 bash[27746]: cephadm 2026-03-08T23:03:44.145044+0000 mgr.y (mgr.24419) 17 : cephadm [INF] Updating vm11:/etc/ceph/ceph.client.admin.keyring 2026-03-08T23:03:45.093 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:44 vm06 bash[27746]: cephadm 2026-03-08T23:03:44.169026+0000 mgr.y (mgr.24419) 18 : cephadm [INF] Updating vm06:/var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/config/ceph.client.admin.keyring 2026-03-08T23:03:45.093 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:44 vm06 bash[27746]: cephadm 2026-03-08T23:03:44.169026+0000 mgr.y (mgr.24419) 18 : cephadm [INF] Updating vm06:/var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/config/ceph.client.admin.keyring 2026-03-08T23:03:45.093 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:44 vm06 bash[27746]: audit 2026-03-08T23:03:44.215962+0000 mon.a (mon.0) 745 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:45.093 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:44 vm06 bash[27746]: audit 2026-03-08T23:03:44.215962+0000 mon.a (mon.0) 745 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:45.093 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:44 vm06 bash[27746]: audit 2026-03-08T23:03:44.236116+0000 mon.a (mon.0) 746 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:45.093 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:44 vm06 bash[27746]: audit 2026-03-08T23:03:44.236116+0000 mon.a (mon.0) 746 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:45.093 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:44 vm06 bash[27746]: audit 
2026-03-08T23:03:44.248432+0000 mon.a (mon.0) 747 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:45.093 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:44 vm06 bash[27746]: audit 2026-03-08T23:03:44.248432+0000 mon.a (mon.0) 747 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:45.093 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:44 vm06 bash[27746]: audit 2026-03-08T23:03:44.256286+0000 mon.a (mon.0) 748 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:45.093 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:44 vm06 bash[27746]: audit 2026-03-08T23:03:44.256286+0000 mon.a (mon.0) 748 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:45.093 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:44 vm06 bash[27746]: audit 2026-03-08T23:03:44.272807+0000 mon.a (mon.0) 749 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:45.093 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:44 vm06 bash[27746]: audit 2026-03-08T23:03:44.272807+0000 mon.a (mon.0) 749 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:45.093 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:44 vm06 bash[27746]: audit 2026-03-08T23:03:44.295058+0000 mon.a (mon.0) 750 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:45.093 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:44 vm06 bash[27746]: audit 2026-03-08T23:03:44.295058+0000 mon.a (mon.0) 750 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:45.093 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:44 vm06 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-08T23:03:45.093 INFO:journalctl@ceph.rgw.foo.a.vm06.stdout:Mar 08 23:03:44 vm06 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:03:45.093 INFO:journalctl@ceph.node-exporter.a.vm06.stdout:Mar 08 23:03:45 vm06 systemd[1]: Started Ceph node-exporter.a for e2eb96e6-1b41-11f1-83e5-75f1b5373d30. 2026-03-08T23:03:45.093 INFO:journalctl@ceph.osd.0.vm06.stdout:Mar 08 23:03:44 vm06 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:03:45.094 INFO:journalctl@ceph.osd.3.vm06.stdout:Mar 08 23:03:44 vm06 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-08T23:03:45.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:44 vm11 bash[23232]: cluster 2026-03-08T23:03:43.624740+0000 mgr.y (mgr.24419) 11 : cluster [DBG] pgmap v5: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-08T23:03:45.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:44 vm11 bash[23232]: cluster 2026-03-08T23:03:43.624740+0000 mgr.y (mgr.24419) 11 : cluster [DBG] pgmap v5: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-08T23:03:45.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:44 vm11 bash[23232]: audit 2026-03-08T23:03:43.919159+0000 mon.a (mon.0) 739 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:45.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:44 vm11 bash[23232]: audit 2026-03-08T23:03:43.919159+0000 mon.a (mon.0) 739 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:45.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:44 vm11 bash[23232]: audit 2026-03-08T23:03:43.939314+0000 mon.a (mon.0) 740 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:45.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:44 vm11 bash[23232]: audit 2026-03-08T23:03:43.939314+0000 mon.a (mon.0) 740 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:45.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:44 vm11 bash[23232]: audit 2026-03-08T23:03:43.946345+0000 mon.c (mon.2) 36 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm06", "name": "osd_memory_target"}]: dispatch 2026-03-08T23:03:45.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:44 vm11 bash[23232]: audit 2026-03-08T23:03:43.946345+0000 mon.c (mon.2) 36 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm06", "name": "osd_memory_target"}]: dispatch 2026-03-08T23:03:45.308 
INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:44 vm11 bash[23232]: audit 2026-03-08T23:03:43.946816+0000 mon.a (mon.0) 741 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm06", "name": "osd_memory_target"}]: dispatch 2026-03-08T23:03:45.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:44 vm11 bash[23232]: audit 2026-03-08T23:03:43.946816+0000 mon.a (mon.0) 741 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm06", "name": "osd_memory_target"}]: dispatch 2026-03-08T23:03:45.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:44 vm11 bash[23232]: audit 2026-03-08T23:03:44.039858+0000 mon.a (mon.0) 742 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:45.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:44 vm11 bash[23232]: audit 2026-03-08T23:03:44.039858+0000 mon.a (mon.0) 742 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:45.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:44 vm11 bash[23232]: audit 2026-03-08T23:03:44.048010+0000 mon.a (mon.0) 743 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:45.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:44 vm11 bash[23232]: audit 2026-03-08T23:03:44.048010+0000 mon.a (mon.0) 743 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:45.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:44 vm11 bash[23232]: audit 2026-03-08T23:03:44.052421+0000 mon.c (mon.2) 37 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm11", "name": "osd_memory_target"}]: dispatch 2026-03-08T23:03:45.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:44 vm11 bash[23232]: audit 2026-03-08T23:03:44.052421+0000 mon.c (mon.2) 37 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm11", "name": "osd_memory_target"}]: dispatch 
2026-03-08T23:03:45.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:44 vm11 bash[23232]: audit 2026-03-08T23:03:44.052705+0000 mon.a (mon.0) 744 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd=[{"prefix": "config rm", "who": "osd/host:vm11", "name": "osd_memory_target"}]: dispatch
2026-03-08T23:03:45.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:44 vm11 bash[23232]: audit 2026-03-08T23:03:44.053914+0000 mon.c (mon.2) 38 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:03:45.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:44 vm11 bash[23232]: audit 2026-03-08T23:03:44.054734+0000 mon.c (mon.2) 39 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T23:03:45.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:44 vm11 bash[23232]: cephadm 2026-03-08T23:03:44.055675+0000 mgr.y (mgr.24419) 12 : cephadm [INF] Updating vm06:/etc/ceph/ceph.conf
2026-03-08T23:03:45.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:44 vm11 bash[23232]: cephadm 2026-03-08T23:03:44.055940+0000 mgr.y (mgr.24419) 13 : cephadm [INF] Updating vm11:/etc/ceph/ceph.conf
2026-03-08T23:03:45.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:44 vm11 bash[23232]: cephadm 2026-03-08T23:03:44.094806+0000 mgr.y (mgr.24419) 14 : cephadm [INF] Updating vm06:/var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/config/ceph.conf
2026-03-08T23:03:45.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:44 vm11 bash[23232]: cephadm 2026-03-08T23:03:44.096873+0000 mgr.y (mgr.24419) 15 : cephadm [INF] Updating vm11:/var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/config/ceph.conf
2026-03-08T23:03:45.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:44 vm11 bash[23232]: cephadm 2026-03-08T23:03:44.133990+0000 mgr.y (mgr.24419) 16 : cephadm [INF] Updating vm06:/etc/ceph/ceph.client.admin.keyring
2026-03-08T23:03:45.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:44 vm11 bash[23232]: cephadm 2026-03-08T23:03:44.145044+0000 mgr.y (mgr.24419) 17 : cephadm [INF] Updating vm11:/etc/ceph/ceph.client.admin.keyring
2026-03-08T23:03:45.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:44 vm11 bash[23232]: cephadm 2026-03-08T23:03:44.169026+0000 mgr.y (mgr.24419) 18 : cephadm [INF] Updating vm06:/var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/config/ceph.client.admin.keyring
2026-03-08T23:03:45.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:44 vm11 bash[23232]: audit 2026-03-08T23:03:44.215962+0000 mon.a (mon.0) 745 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:03:45.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:44 vm11 bash[23232]: audit 2026-03-08T23:03:44.236116+0000 mon.a (mon.0) 746 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:03:45.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:44 vm11 bash[23232]: audit 2026-03-08T23:03:44.248432+0000 mon.a (mon.0) 747 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:03:45.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:44 vm11 bash[23232]: audit 2026-03-08T23:03:44.256286+0000 mon.a (mon.0) 748 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:03:45.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:44 vm11 bash[23232]: audit 2026-03-08T23:03:44.272807+0000 mon.a (mon.0) 749 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:03:45.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:44 vm11 bash[23232]: audit 2026-03-08T23:03:44.295058+0000 mon.a (mon.0) 750 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:03:45.530 INFO:journalctl@ceph.node-exporter.a.vm06.stdout:Mar 08 23:03:45 vm06 bash[55090]: Unable to find image 'quay.io/prometheus/node-exporter:v1.7.0' locally
2026-03-08T23:03:45.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:45 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:03:45.807 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:03:45 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:03:45.808 INFO:journalctl@ceph.osd.4.vm11.stdout:Mar 08 23:03:45 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:03:45.808 INFO:journalctl@ceph.osd.5.vm11.stdout:Mar 08 23:03:45 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:03:45.808 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 08 23:03:45 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:03:45.808 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 08 23:03:45 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:03:45.808 INFO:journalctl@ceph.iscsi.iscsi.a.vm11.stdout:Mar 08 23:03:45 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:03:45.808 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 08 23:03:45 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:03:46.089 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:45 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:03:46.089 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 08 23:03:45 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:03:46.089 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:03:45 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:03:46.089 INFO:journalctl@ceph.iscsi.iscsi.a.vm11.stdout:Mar 08 23:03:45 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:03:46.089 INFO:journalctl@ceph.osd.4.vm11.stdout:Mar 08 23:03:45 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:03:46.089 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 08 23:03:45 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:03:46.089 INFO:journalctl@ceph.osd.5.vm11.stdout:Mar 08 23:03:45 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:03:46.089 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 08 23:03:45 vm11 systemd[1]: Started Ceph node-exporter.b for e2eb96e6-1b41-11f1-83e5-75f1b5373d30.
2026-03-08T23:03:46.089 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 08 23:03:45 vm11 bash[50702]: Unable to find image 'quay.io/prometheus/node-exporter:v1.7.0' locally
2026-03-08T23:03:46.089 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 08 23:03:45 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:03:46.453 INFO:journalctl@ceph.node-exporter.a.vm06.stdout:Mar 08 23:03:46 vm06 bash[55090]: v1.7.0: Pulling from prometheus/node-exporter
2026-03-08T23:03:46.454 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:46 vm06 bash[20625]: cephadm 2026-03-08T23:03:44.181036+0000 mgr.y (mgr.24419) 19 : cephadm [INF] Updating vm11:/var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/config/ceph.client.admin.keyring
2026-03-08T23:03:46.454 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:46 vm06 bash[20625]: audit 2026-03-08T23:03:44.231748+0000 mgr.y (mgr.24419) 20 : audit [DBG] from='client.24472 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "grafana", "placement": "1;vm11=a", "target": ["mon-mgr", ""]}]: dispatch
2026-03-08T23:03:46.454 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:46 vm06 bash[20625]: cephadm 2026-03-08T23:03:44.232548+0000 mgr.y (mgr.24419) 21 : cephadm [INF] Saving service grafana spec with placement vm11=a;count:1
2026-03-08T23:03:46.454 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:46 vm06 bash[20625]: cephadm 2026-03-08T23:03:44.303494+0000 mgr.y (mgr.24419) 22 : cephadm [INF] Deploying daemon node-exporter.a on vm06
2026-03-08T23:03:46.454 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:46 vm06 bash[20625]: audit 2026-03-08T23:03:45.087158+0000 mon.a (mon.0) 751 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:03:46.454 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:46 vm06 bash[20625]: audit 2026-03-08T23:03:45.094801+0000 mon.a (mon.0) 752 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:03:46.454 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:46 vm06 bash[20625]: audit 2026-03-08T23:03:45.155971+0000 mon.a (mon.0) 753 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:03:46.454 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:46 vm06 bash[20625]: audit 2026-03-08T23:03:45.964142+0000 mon.a (mon.0) 754 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:03:46.454 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:46 vm06 bash[20625]: audit 2026-03-08T23:03:45.973491+0000 mon.a (mon.0) 755 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:03:46.454 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:46 vm06 bash[20625]: audit 2026-03-08T23:03:45.978860+0000 mon.a (mon.0) 756 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:03:46.454 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:46 vm06 bash[20625]: audit 2026-03-08T23:03:45.983509+0000 mon.a (mon.0) 757 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:03:46.454 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:46 vm06 bash[20625]: audit 2026-03-08T23:03:45.991579+0000 mon.a (mon.0) 758 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:03:46.454 INFO:journalctl@ceph.alertmanager.a.vm06.stdout:Mar 08 23:03:46 vm06 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:03:46.455 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:46 vm06 bash[27746]: cephadm 2026-03-08T23:03:44.181036+0000 mgr.y (mgr.24419) 19 : cephadm [INF] Updating vm11:/var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/config/ceph.client.admin.keyring
2026-03-08T23:03:46.455 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:46 vm06 bash[27746]: audit 2026-03-08T23:03:44.231748+0000 mgr.y (mgr.24419) 20 : audit [DBG] from='client.24472 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "grafana", "placement": "1;vm11=a", "target": ["mon-mgr", ""]}]: dispatch
2026-03-08T23:03:46.455 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:46 vm06 bash[27746]: cephadm 2026-03-08T23:03:44.232548+0000 mgr.y (mgr.24419) 21 : cephadm [INF] Saving service grafana spec with placement vm11=a;count:1
2026-03-08T23:03:46.455 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:46 vm06 bash[27746]: cephadm 2026-03-08T23:03:44.303494+0000 mgr.y (mgr.24419) 22 : cephadm [INF] Deploying daemon node-exporter.a on vm06
2026-03-08T23:03:46.455 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:46 vm06 bash[27746]: audit 2026-03-08T23:03:45.087158+0000 mon.a (mon.0) 751 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:03:46.455 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:46 vm06 bash[27746]: audit 2026-03-08T23:03:45.094801+0000 mon.a (mon.0) 752 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:03:46.455 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:46 vm06 bash[27746]: audit 2026-03-08T23:03:45.155971+0000 mon.a (mon.0) 753 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:03:46.455 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:46 vm06 bash[27746]: audit 2026-03-08T23:03:45.964142+0000 mon.a (mon.0) 754 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:03:46.455 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:46 vm06 bash[27746]: audit 2026-03-08T23:03:45.973491+0000 mon.a (mon.0) 755 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:03:46.455 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:46 vm06 bash[27746]: audit 2026-03-08T23:03:45.978860+0000 mon.a (mon.0) 756 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:03:46.455 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:46 vm06 bash[27746]: audit 2026-03-08T23:03:45.983509+0000 mon.a (mon.0) 757 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:03:46.455 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:46 vm06 bash[27746]: audit 2026-03-08T23:03:45.991579+0000 mon.a (mon.0) 758 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:03:46.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:46 vm11 bash[23232]: cephadm 2026-03-08T23:03:44.181036+0000 mgr.y (mgr.24419) 19 : cephadm [INF] Updating vm11:/var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/config/ceph.client.admin.keyring
2026-03-08T23:03:46.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:46 vm11 bash[23232]: audit 2026-03-08T23:03:44.231748+0000 mgr.y (mgr.24419) 20 : audit [DBG] from='client.24472 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "grafana", "placement": "1;vm11=a", "target": ["mon-mgr", ""]}]: dispatch
2026-03-08T23:03:46.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:46 vm11 bash[23232]: cephadm 2026-03-08T23:03:44.232548+0000 mgr.y (mgr.24419) 21 : cephadm [INF] Saving service grafana spec with placement vm11=a;count:1
2026-03-08T23:03:46.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:46 vm11 bash[23232]: cephadm 2026-03-08T23:03:44.303494+0000 mgr.y (mgr.24419) 22 : cephadm [INF] Deploying daemon node-exporter.a on vm06
2026-03-08T23:03:46.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:46 vm11 bash[23232]: audit 2026-03-08T23:03:45.087158+0000 mon.a (mon.0) 751 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:03:46.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:46 vm11 bash[23232]: audit 2026-03-08T23:03:45.094801+0000 mon.a (mon.0) 752 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:03:46.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:46 vm11 bash[23232]: audit 2026-03-08T23:03:45.155971+0000 mon.a (mon.0) 753 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:03:46.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:46 vm11 bash[23232]: audit 2026-03-08T23:03:45.964142+0000 mon.a (mon.0) 754 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:03:46.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:46 vm11 bash[23232]: audit 2026-03-08T23:03:45.973491+0000 mon.a (mon.0) 755 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:03:46.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:46 vm11 bash[23232]: audit 2026-03-08T23:03:45.978860+0000 mon.a (mon.0) 756 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:03:46.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:46 vm11 bash[23232]: audit 2026-03-08T23:03:45.983509+0000 mon.a (mon.0) 757 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:03:46.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:46 vm11 bash[23232]: audit 2026-03-08T23:03:45.991579+0000 mon.a (mon.0) 758 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:03:47.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:47 vm06 bash[20625]: cephadm 2026-03-08T23:03:45.181596+0000 mgr.y (mgr.24419) 23 : cephadm [INF] Deploying daemon node-exporter.b on vm11
2026-03-08T23:03:47.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:47 vm06 bash[20625]: cluster 2026-03-08T23:03:45.625317+0000 mgr.y (mgr.24419) 24 : cluster [DBG] pgmap v6: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 30 KiB/s rd, 0 B/s wr, 12 op/s
2026-03-08T23:03:47.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:47 vm06 bash[20625]: cephadm 2026-03-08T23:03:45.998481+0000 mgr.y (mgr.24419) 25 : cephadm [INF] Deploying daemon alertmanager.a on vm06
2026-03-08T23:03:47.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:47 vm06 bash[27746]: cephadm 2026-03-08T23:03:45.181596+0000 mgr.y (mgr.24419) 23 : cephadm [INF] Deploying daemon node-exporter.b on vm11
2026-03-08T23:03:47.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:47 vm06 bash[27746]: cluster 2026-03-08T23:03:45.625317+0000 mgr.y (mgr.24419) 24 : cluster [DBG] pgmap v6: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 30 KiB/s rd, 0 B/s wr, 12 op/s
2026-03-08T23:03:47.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:47 vm06 bash[27746]: cephadm 2026-03-08T23:03:45.998481+0000 mgr.y (mgr.24419) 25 : cephadm [INF] Deploying daemon alertmanager.a on vm06
2026-03-08T23:03:47.280 INFO:journalctl@ceph.node-exporter.a.vm06.stdout:Mar 08 23:03:46 vm06 bash[55090]: 2abcce694348: Pulling fs layer
2026-03-08T23:03:47.280 INFO:journalctl@ceph.node-exporter.a.vm06.stdout:Mar 08 23:03:46 vm06 bash[55090]: 455fd88e5221: Pulling fs layer
2026-03-08T23:03:47.280 INFO:journalctl@ceph.node-exporter.a.vm06.stdout:Mar 08 23:03:46 vm06 bash[55090]: 324153f2810a: Pulling fs layer
2026-03-08T23:03:47.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:47 vm11 bash[23232]: cephadm 2026-03-08T23:03:45.181596+0000 mgr.y (mgr.24419) 23 : cephadm [INF] Deploying daemon node-exporter.b on vm11
2026-03-08T23:03:47.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:47 vm11 bash[23232]: cluster 2026-03-08T23:03:45.625317+0000 mgr.y (mgr.24419) 24 : cluster [DBG] pgmap v6: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 30 KiB/s rd, 0 B/s wr, 12 op/s
2026-03-08T23:03:47.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:47 vm11 bash[23232]: cephadm 2026-03-08T23:03:45.998481+0000 mgr.y (mgr.24419) 25 : cephadm [INF] Deploying daemon alertmanager.a on vm06
2026-03-08T23:03:47.557 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 08 23:03:47 vm11 bash[50702]: v1.7.0: Pulling from prometheus/node-exporter
2026-03-08T23:03:47.737 INFO:journalctl@ceph.node-exporter.a.vm06.stdout:Mar 08 23:03:47 vm06 bash[55090]: 455fd88e5221: Verifying Checksum
2026-03-08T23:03:47.738 INFO:journalctl@ceph.node-exporter.a.vm06.stdout:Mar 08 23:03:47 vm06 bash[55090]: 455fd88e5221: Download complete
2026-03-08T23:03:47.738 INFO:journalctl@ceph.node-exporter.a.vm06.stdout:Mar 08 23:03:47 vm06 bash[55090]: 2abcce694348: Verifying Checksum
2026-03-08T23:03:47.738 INFO:journalctl@ceph.node-exporter.a.vm06.stdout:Mar 08 23:03:47 vm06 bash[55090]: 2abcce694348: Download complete 2026-03-08T23:03:47.738 INFO:journalctl@ceph.node-exporter.a.vm06.stdout:Mar 08 23:03:47 vm06 bash[55090]: 2abcce694348: Pull complete 2026-03-08T23:03:47.738 INFO:journalctl@ceph.node-exporter.a.vm06.stdout:Mar 08 23:03:47 vm06 bash[55090]: 455fd88e5221: Pull complete 2026-03-08T23:03:48.030 INFO:journalctl@ceph.node-exporter.a.vm06.stdout:Mar 08 23:03:47 vm06 bash[55090]: 324153f2810a: Verifying Checksum 2026-03-08T23:03:48.030 INFO:journalctl@ceph.node-exporter.a.vm06.stdout:Mar 08 23:03:47 vm06 bash[55090]: 324153f2810a: Download complete 2026-03-08T23:03:48.030 INFO:journalctl@ceph.node-exporter.a.vm06.stdout:Mar 08 23:03:47 vm06 bash[55090]: 324153f2810a: Pull complete 2026-03-08T23:03:48.030 INFO:journalctl@ceph.node-exporter.a.vm06.stdout:Mar 08 23:03:47 vm06 bash[55090]: Digest: sha256:4cb2b9019f1757be8482419002cb7afe028fdba35d47958829e4cfeaf6246d80 2026-03-08T23:03:48.030 INFO:journalctl@ceph.node-exporter.a.vm06.stdout:Mar 08 23:03:47 vm06 bash[55090]: Status: Downloaded newer image for quay.io/prometheus/node-exporter:v1.7.0 2026-03-08T23:03:48.030 INFO:journalctl@ceph.node-exporter.a.vm06.stdout:Mar 08 23:03:47 vm06 bash[55090]: ts=2026-03-08T23:03:47.984Z caller=node_exporter.go:192 level=info msg="Starting node_exporter" version="(version=1.7.0, branch=HEAD, revision=7333465abf9efba81876303bb57e6fadb946041b)" 2026-03-08T23:03:48.030 INFO:journalctl@ceph.node-exporter.a.vm06.stdout:Mar 08 23:03:47 vm06 bash[55090]: ts=2026-03-08T23:03:47.984Z caller=node_exporter.go:193 level=info msg="Build context" build_context="(go=go1.21.4, platform=linux/amd64, user=root@35918982f6d8, date=20231112-23:53:35, tags=netgo osusergo static_build)" 2026-03-08T23:03:48.030 INFO:journalctl@ceph.node-exporter.a.vm06.stdout:Mar 08 23:03:47 vm06 bash[55090]: ts=2026-03-08T23:03:47.985Z caller=filesystem_common.go:111 level=info 
collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/) 2026-03-08T23:03:48.030 INFO:journalctl@ceph.node-exporter.a.vm06.stdout:Mar 08 23:03:47 vm06 bash[55090]: ts=2026-03-08T23:03:47.985Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$ 2026-03-08T23:03:48.030 INFO:journalctl@ceph.node-exporter.a.vm06.stdout:Mar 08 23:03:47 vm06 bash[55090]: ts=2026-03-08T23:03:47.985Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$ 2026-03-08T23:03:48.030 INFO:journalctl@ceph.node-exporter.a.vm06.stdout:Mar 08 23:03:47 vm06 bash[55090]: ts=2026-03-08T23:03:47.985Z caller=diskstats_linux.go:265 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data 2026-03-08T23:03:48.030 INFO:journalctl@ceph.node-exporter.a.vm06.stdout:Mar 08 23:03:47 vm06 bash[55090]: ts=2026-03-08T23:03:47.986Z caller=node_exporter.go:110 level=info msg="Enabled collectors" 2026-03-08T23:03:48.030 INFO:journalctl@ceph.node-exporter.a.vm06.stdout:Mar 08 23:03:47 vm06 bash[55090]: ts=2026-03-08T23:03:47.986Z caller=node_exporter.go:117 level=info collector=arp 2026-03-08T23:03:48.030 INFO:journalctl@ceph.node-exporter.a.vm06.stdout:Mar 08 23:03:47 vm06 bash[55090]: ts=2026-03-08T23:03:47.986Z caller=node_exporter.go:117 level=info collector=bcache 2026-03-08T23:03:48.030 INFO:journalctl@ceph.node-exporter.a.vm06.stdout:Mar 08 23:03:47 vm06 bash[55090]: ts=2026-03-08T23:03:47.986Z caller=node_exporter.go:117 level=info collector=bonding 
2026-03-08T23:03:48.030 INFO:journalctl@ceph.node-exporter.a.vm06.stdout:Mar 08 23:03:47 vm06 bash[55090]: ts=2026-03-08T23:03:47.986Z caller=node_exporter.go:117 level=info collector=btrfs 2026-03-08T23:03:48.030 INFO:journalctl@ceph.node-exporter.a.vm06.stdout:Mar 08 23:03:47 vm06 bash[55090]: ts=2026-03-08T23:03:47.986Z caller=node_exporter.go:117 level=info collector=conntrack 2026-03-08T23:03:48.030 INFO:journalctl@ceph.node-exporter.a.vm06.stdout:Mar 08 23:03:47 vm06 bash[55090]: ts=2026-03-08T23:03:47.986Z caller=node_exporter.go:117 level=info collector=cpu 2026-03-08T23:03:48.030 INFO:journalctl@ceph.node-exporter.a.vm06.stdout:Mar 08 23:03:47 vm06 bash[55090]: ts=2026-03-08T23:03:47.986Z caller=node_exporter.go:117 level=info collector=cpufreq 2026-03-08T23:03:48.030 INFO:journalctl@ceph.node-exporter.a.vm06.stdout:Mar 08 23:03:47 vm06 bash[55090]: ts=2026-03-08T23:03:47.986Z caller=node_exporter.go:117 level=info collector=diskstats 2026-03-08T23:03:48.030 INFO:journalctl@ceph.node-exporter.a.vm06.stdout:Mar 08 23:03:47 vm06 bash[55090]: ts=2026-03-08T23:03:47.986Z caller=node_exporter.go:117 level=info collector=dmi 2026-03-08T23:03:48.030 INFO:journalctl@ceph.node-exporter.a.vm06.stdout:Mar 08 23:03:47 vm06 bash[55090]: ts=2026-03-08T23:03:47.986Z caller=node_exporter.go:117 level=info collector=edac 2026-03-08T23:03:48.030 INFO:journalctl@ceph.node-exporter.a.vm06.stdout:Mar 08 23:03:47 vm06 bash[55090]: ts=2026-03-08T23:03:47.986Z caller=node_exporter.go:117 level=info collector=entropy 2026-03-08T23:03:48.030 INFO:journalctl@ceph.node-exporter.a.vm06.stdout:Mar 08 23:03:47 vm06 bash[55090]: ts=2026-03-08T23:03:47.986Z caller=node_exporter.go:117 level=info collector=fibrechannel 2026-03-08T23:03:48.030 INFO:journalctl@ceph.node-exporter.a.vm06.stdout:Mar 08 23:03:47 vm06 bash[55090]: ts=2026-03-08T23:03:47.986Z caller=node_exporter.go:117 level=info collector=filefd 2026-03-08T23:03:48.030 INFO:journalctl@ceph.node-exporter.a.vm06.stdout:Mar 08 
23:03:47 vm06 bash[55090]: ts=2026-03-08T23:03:47.986Z caller=node_exporter.go:117 level=info collector=filesystem 2026-03-08T23:03:48.030 INFO:journalctl@ceph.node-exporter.a.vm06.stdout:Mar 08 23:03:47 vm06 bash[55090]: ts=2026-03-08T23:03:47.986Z caller=node_exporter.go:117 level=info collector=hwmon 2026-03-08T23:03:48.030 INFO:journalctl@ceph.node-exporter.a.vm06.stdout:Mar 08 23:03:47 vm06 bash[55090]: ts=2026-03-08T23:03:47.986Z caller=node_exporter.go:117 level=info collector=infiniband 2026-03-08T23:03:48.030 INFO:journalctl@ceph.node-exporter.a.vm06.stdout:Mar 08 23:03:47 vm06 bash[55090]: ts=2026-03-08T23:03:47.986Z caller=node_exporter.go:117 level=info collector=ipvs 2026-03-08T23:03:48.030 INFO:journalctl@ceph.node-exporter.a.vm06.stdout:Mar 08 23:03:47 vm06 bash[55090]: ts=2026-03-08T23:03:47.986Z caller=node_exporter.go:117 level=info collector=loadavg 2026-03-08T23:03:48.030 INFO:journalctl@ceph.node-exporter.a.vm06.stdout:Mar 08 23:03:47 vm06 bash[55090]: ts=2026-03-08T23:03:47.986Z caller=node_exporter.go:117 level=info collector=mdadm 2026-03-08T23:03:48.030 INFO:journalctl@ceph.node-exporter.a.vm06.stdout:Mar 08 23:03:47 vm06 bash[55090]: ts=2026-03-08T23:03:47.986Z caller=node_exporter.go:117 level=info collector=meminfo 2026-03-08T23:03:48.030 INFO:journalctl@ceph.node-exporter.a.vm06.stdout:Mar 08 23:03:47 vm06 bash[55090]: ts=2026-03-08T23:03:47.986Z caller=node_exporter.go:117 level=info collector=netclass 2026-03-08T23:03:48.030 INFO:journalctl@ceph.node-exporter.a.vm06.stdout:Mar 08 23:03:47 vm06 bash[55090]: ts=2026-03-08T23:03:47.986Z caller=node_exporter.go:117 level=info collector=netdev 2026-03-08T23:03:48.030 INFO:journalctl@ceph.node-exporter.a.vm06.stdout:Mar 08 23:03:47 vm06 bash[55090]: ts=2026-03-08T23:03:47.986Z caller=node_exporter.go:117 level=info collector=netstat 2026-03-08T23:03:48.030 INFO:journalctl@ceph.node-exporter.a.vm06.stdout:Mar 08 23:03:47 vm06 bash[55090]: ts=2026-03-08T23:03:47.986Z 
caller=node_exporter.go:117 level=info collector=nfs 2026-03-08T23:03:48.030 INFO:journalctl@ceph.node-exporter.a.vm06.stdout:Mar 08 23:03:47 vm06 bash[55090]: ts=2026-03-08T23:03:47.986Z caller=node_exporter.go:117 level=info collector=nfsd 2026-03-08T23:03:48.030 INFO:journalctl@ceph.node-exporter.a.vm06.stdout:Mar 08 23:03:47 vm06 bash[55090]: ts=2026-03-08T23:03:47.986Z caller=node_exporter.go:117 level=info collector=nvme 2026-03-08T23:03:48.030 INFO:journalctl@ceph.node-exporter.a.vm06.stdout:Mar 08 23:03:47 vm06 bash[55090]: ts=2026-03-08T23:03:47.986Z caller=node_exporter.go:117 level=info collector=os 2026-03-08T23:03:48.030 INFO:journalctl@ceph.node-exporter.a.vm06.stdout:Mar 08 23:03:47 vm06 bash[55090]: ts=2026-03-08T23:03:47.986Z caller=node_exporter.go:117 level=info collector=powersupplyclass 2026-03-08T23:03:48.030 INFO:journalctl@ceph.node-exporter.a.vm06.stdout:Mar 08 23:03:47 vm06 bash[55090]: ts=2026-03-08T23:03:47.986Z caller=node_exporter.go:117 level=info collector=pressure 2026-03-08T23:03:48.030 INFO:journalctl@ceph.node-exporter.a.vm06.stdout:Mar 08 23:03:47 vm06 bash[55090]: ts=2026-03-08T23:03:47.986Z caller=node_exporter.go:117 level=info collector=rapl 2026-03-08T23:03:48.030 INFO:journalctl@ceph.node-exporter.a.vm06.stdout:Mar 08 23:03:47 vm06 bash[55090]: ts=2026-03-08T23:03:47.986Z caller=node_exporter.go:117 level=info collector=schedstat 2026-03-08T23:03:48.030 INFO:journalctl@ceph.node-exporter.a.vm06.stdout:Mar 08 23:03:47 vm06 bash[55090]: ts=2026-03-08T23:03:47.986Z caller=node_exporter.go:117 level=info collector=selinux 2026-03-08T23:03:48.031 INFO:journalctl@ceph.node-exporter.a.vm06.stdout:Mar 08 23:03:47 vm06 bash[55090]: ts=2026-03-08T23:03:47.986Z caller=node_exporter.go:117 level=info collector=sockstat 2026-03-08T23:03:48.031 INFO:journalctl@ceph.node-exporter.a.vm06.stdout:Mar 08 23:03:47 vm06 bash[55090]: ts=2026-03-08T23:03:47.986Z caller=node_exporter.go:117 level=info collector=softnet 2026-03-08T23:03:48.031 
INFO:journalctl@ceph.node-exporter.a.vm06.stdout:Mar 08 23:03:47 vm06 bash[55090]: ts=2026-03-08T23:03:47.986Z caller=node_exporter.go:117 level=info collector=stat 2026-03-08T23:03:48.031 INFO:journalctl@ceph.node-exporter.a.vm06.stdout:Mar 08 23:03:47 vm06 bash[55090]: ts=2026-03-08T23:03:47.986Z caller=node_exporter.go:117 level=info collector=tapestats 2026-03-08T23:03:48.031 INFO:journalctl@ceph.node-exporter.a.vm06.stdout:Mar 08 23:03:47 vm06 bash[55090]: ts=2026-03-08T23:03:47.986Z caller=node_exporter.go:117 level=info collector=textfile 2026-03-08T23:03:48.031 INFO:journalctl@ceph.node-exporter.a.vm06.stdout:Mar 08 23:03:47 vm06 bash[55090]: ts=2026-03-08T23:03:47.986Z caller=node_exporter.go:117 level=info collector=thermal_zone 2026-03-08T23:03:48.031 INFO:journalctl@ceph.node-exporter.a.vm06.stdout:Mar 08 23:03:47 vm06 bash[55090]: ts=2026-03-08T23:03:47.986Z caller=node_exporter.go:117 level=info collector=time 2026-03-08T23:03:48.031 INFO:journalctl@ceph.node-exporter.a.vm06.stdout:Mar 08 23:03:47 vm06 bash[55090]: ts=2026-03-08T23:03:47.986Z caller=node_exporter.go:117 level=info collector=udp_queues 2026-03-08T23:03:48.031 INFO:journalctl@ceph.node-exporter.a.vm06.stdout:Mar 08 23:03:47 vm06 bash[55090]: ts=2026-03-08T23:03:47.986Z caller=node_exporter.go:117 level=info collector=uname 2026-03-08T23:03:48.031 INFO:journalctl@ceph.node-exporter.a.vm06.stdout:Mar 08 23:03:47 vm06 bash[55090]: ts=2026-03-08T23:03:47.986Z caller=node_exporter.go:117 level=info collector=vmstat 2026-03-08T23:03:48.031 INFO:journalctl@ceph.node-exporter.a.vm06.stdout:Mar 08 23:03:47 vm06 bash[55090]: ts=2026-03-08T23:03:47.986Z caller=node_exporter.go:117 level=info collector=xfs 2026-03-08T23:03:48.031 INFO:journalctl@ceph.node-exporter.a.vm06.stdout:Mar 08 23:03:47 vm06 bash[55090]: ts=2026-03-08T23:03:47.986Z caller=node_exporter.go:117 level=info collector=zfs 2026-03-08T23:03:48.031 INFO:journalctl@ceph.node-exporter.a.vm06.stdout:Mar 08 23:03:47 vm06 bash[55090]: 
ts=2026-03-08T23:03:47.987Z caller=tls_config.go:274 level=info msg="Listening on" address=[::]:9100 2026-03-08T23:03:48.031 INFO:journalctl@ceph.node-exporter.a.vm06.stdout:Mar 08 23:03:47 vm06 bash[55090]: ts=2026-03-08T23:03:47.987Z caller=tls_config.go:277 level=info msg="TLS is disabled." http2=false address=[::]:9100 2026-03-08T23:03:48.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:48 vm11 bash[23232]: cluster 2026-03-08T23:03:47.625652+0000 mgr.y (mgr.24419) 26 : cluster [DBG] pgmap v7: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 23 KiB/s rd, 0 B/s wr, 9 op/s 2026-03-08T23:03:48.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:48 vm11 bash[23232]: cluster 2026-03-08T23:03:47.625652+0000 mgr.y (mgr.24419) 26 : cluster [DBG] pgmap v7: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 23 KiB/s rd, 0 B/s wr, 9 op/s 2026-03-08T23:03:48.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:48 vm11 bash[23232]: audit 2026-03-08T23:03:47.702886+0000 mon.a (mon.0) 759 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:48.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:48 vm11 bash[23232]: audit 2026-03-08T23:03:47.702886+0000 mon.a (mon.0) 759 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:48.807 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 08 23:03:48 vm11 bash[50702]: 2abcce694348: Pulling fs layer 2026-03-08T23:03:48.807 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 08 23:03:48 vm11 bash[50702]: 455fd88e5221: Pulling fs layer 2026-03-08T23:03:48.807 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 08 23:03:48 vm11 bash[50702]: 324153f2810a: Pulling fs layer 2026-03-08T23:03:49.036 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:48 vm06 bash[20625]: cluster 2026-03-08T23:03:47.625652+0000 mgr.y (mgr.24419) 26 : cluster [DBG] pgmap v7: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 23 KiB/s 
rd, 0 B/s wr, 9 op/s 2026-03-08T23:03:49.036 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:48 vm06 bash[20625]: cluster 2026-03-08T23:03:47.625652+0000 mgr.y (mgr.24419) 26 : cluster [DBG] pgmap v7: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 23 KiB/s rd, 0 B/s wr, 9 op/s 2026-03-08T23:03:49.036 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:48 vm06 bash[20625]: audit 2026-03-08T23:03:47.702886+0000 mon.a (mon.0) 759 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:49.036 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:48 vm06 bash[20625]: audit 2026-03-08T23:03:47.702886+0000 mon.a (mon.0) 759 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:49.036 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:48 vm06 bash[27746]: cluster 2026-03-08T23:03:47.625652+0000 mgr.y (mgr.24419) 26 : cluster [DBG] pgmap v7: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 23 KiB/s rd, 0 B/s wr, 9 op/s 2026-03-08T23:03:49.036 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:48 vm06 bash[27746]: cluster 2026-03-08T23:03:47.625652+0000 mgr.y (mgr.24419) 26 : cluster [DBG] pgmap v7: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 23 KiB/s rd, 0 B/s wr, 9 op/s 2026-03-08T23:03:49.036 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:48 vm06 bash[27746]: audit 2026-03-08T23:03:47.702886+0000 mon.a (mon.0) 759 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:49.036 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:48 vm06 bash[27746]: audit 2026-03-08T23:03:47.702886+0000 mon.a (mon.0) 759 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:49.306 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 08 23:03:48 vm11 bash[50702]: 455fd88e5221: Download complete 2026-03-08T23:03:49.306 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 08 23:03:48 vm11 bash[50702]: 2abcce694348: Verifying Checksum 
2026-03-08T23:03:49.306 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 08 23:03:48 vm11 bash[50702]: 2abcce694348: Download complete 2026-03-08T23:03:49.306 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 08 23:03:49 vm11 bash[50702]: 2abcce694348: Pull complete 2026-03-08T23:03:49.306 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 08 23:03:49 vm11 bash[50702]: 324153f2810a: Verifying Checksum 2026-03-08T23:03:49.306 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 08 23:03:49 vm11 bash[50702]: 324153f2810a: Download complete 2026-03-08T23:03:49.557 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 08 23:03:49 vm11 bash[50702]: 455fd88e5221: Pull complete 2026-03-08T23:03:49.557 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 08 23:03:49 vm11 bash[50702]: 324153f2810a: Pull complete 2026-03-08T23:03:49.557 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 08 23:03:49 vm11 bash[50702]: Digest: sha256:4cb2b9019f1757be8482419002cb7afe028fdba35d47958829e4cfeaf6246d80 2026-03-08T23:03:49.557 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 08 23:03:49 vm11 bash[50702]: Status: Downloaded newer image for quay.io/prometheus/node-exporter:v1.7.0 2026-03-08T23:03:49.981 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/mon.c/config 2026-03-08T23:03:50.057 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 08 23:03:49 vm11 bash[50702]: ts=2026-03-08T23:03:49.581Z caller=node_exporter.go:192 level=info msg="Starting node_exporter" version="(version=1.7.0, branch=HEAD, revision=7333465abf9efba81876303bb57e6fadb946041b)" 2026-03-08T23:03:50.058 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 08 23:03:49 vm11 bash[50702]: ts=2026-03-08T23:03:49.581Z caller=node_exporter.go:193 level=info msg="Build context" build_context="(go=go1.21.4, platform=linux/amd64, user=root@35918982f6d8, date=20231112-23:53:35, tags=netgo osusergo static_build)" 2026-03-08T23:03:50.058 
INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 08 23:03:49 vm11 bash[50702]: ts=2026-03-08T23:03:49.582Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/) 2026-03-08T23:03:50.058 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 08 23:03:49 vm11 bash[50702]: ts=2026-03-08T23:03:49.582Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$ 2026-03-08T23:03:50.058 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 08 23:03:49 vm11 bash[50702]: ts=2026-03-08T23:03:49.583Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$ 2026-03-08T23:03:50.058 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 08 23:03:49 vm11 bash[50702]: ts=2026-03-08T23:03:49.583Z caller=diskstats_linux.go:265 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data 2026-03-08T23:03:50.058 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 08 23:03:49 vm11 bash[50702]: ts=2026-03-08T23:03:49.584Z caller=node_exporter.go:110 level=info msg="Enabled collectors" 2026-03-08T23:03:50.058 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 08 23:03:49 vm11 bash[50702]: ts=2026-03-08T23:03:49.584Z caller=node_exporter.go:117 level=info collector=arp 2026-03-08T23:03:50.058 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 08 23:03:49 vm11 bash[50702]: ts=2026-03-08T23:03:49.584Z caller=node_exporter.go:117 level=info collector=bcache 2026-03-08T23:03:50.058 
INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 08 23:03:49 vm11 bash[50702]: ts=2026-03-08T23:03:49.584Z caller=node_exporter.go:117 level=info collector=bonding 2026-03-08T23:03:50.058 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 08 23:03:49 vm11 bash[50702]: ts=2026-03-08T23:03:49.584Z caller=node_exporter.go:117 level=info collector=btrfs 2026-03-08T23:03:50.058 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 08 23:03:49 vm11 bash[50702]: ts=2026-03-08T23:03:49.584Z caller=node_exporter.go:117 level=info collector=conntrack 2026-03-08T23:03:50.058 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 08 23:03:49 vm11 bash[50702]: ts=2026-03-08T23:03:49.584Z caller=node_exporter.go:117 level=info collector=cpu 2026-03-08T23:03:50.058 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 08 23:03:49 vm11 bash[50702]: ts=2026-03-08T23:03:49.584Z caller=node_exporter.go:117 level=info collector=cpufreq 2026-03-08T23:03:50.058 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 08 23:03:49 vm11 bash[50702]: ts=2026-03-08T23:03:49.584Z caller=node_exporter.go:117 level=info collector=diskstats 2026-03-08T23:03:50.058 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 08 23:03:49 vm11 bash[50702]: ts=2026-03-08T23:03:49.584Z caller=node_exporter.go:117 level=info collector=dmi 2026-03-08T23:03:50.058 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 08 23:03:49 vm11 bash[50702]: ts=2026-03-08T23:03:49.584Z caller=node_exporter.go:117 level=info collector=edac 2026-03-08T23:03:50.058 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 08 23:03:49 vm11 bash[50702]: ts=2026-03-08T23:03:49.584Z caller=node_exporter.go:117 level=info collector=entropy 2026-03-08T23:03:50.058 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 08 23:03:49 vm11 bash[50702]: ts=2026-03-08T23:03:49.585Z caller=node_exporter.go:117 level=info collector=fibrechannel 2026-03-08T23:03:50.058 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 08 23:03:49 vm11 bash[50702]: 
ts=2026-03-08T23:03:49.585Z caller=node_exporter.go:117 level=info collector=filefd 2026-03-08T23:03:50.058 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 08 23:03:49 vm11 bash[50702]: ts=2026-03-08T23:03:49.585Z caller=node_exporter.go:117 level=info collector=filesystem 2026-03-08T23:03:50.058 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 08 23:03:49 vm11 bash[50702]: ts=2026-03-08T23:03:49.585Z caller=node_exporter.go:117 level=info collector=hwmon 2026-03-08T23:03:50.058 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 08 23:03:49 vm11 bash[50702]: ts=2026-03-08T23:03:49.585Z caller=node_exporter.go:117 level=info collector=infiniband 2026-03-08T23:03:50.058 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 08 23:03:49 vm11 bash[50702]: ts=2026-03-08T23:03:49.585Z caller=node_exporter.go:117 level=info collector=ipvs 2026-03-08T23:03:50.058 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 08 23:03:49 vm11 bash[50702]: ts=2026-03-08T23:03:49.585Z caller=node_exporter.go:117 level=info collector=loadavg 2026-03-08T23:03:50.058 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 08 23:03:49 vm11 bash[50702]: ts=2026-03-08T23:03:49.585Z caller=node_exporter.go:117 level=info collector=mdadm 2026-03-08T23:03:50.058 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 08 23:03:49 vm11 bash[50702]: ts=2026-03-08T23:03:49.586Z caller=node_exporter.go:117 level=info collector=meminfo 2026-03-08T23:03:50.058 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 08 23:03:49 vm11 bash[50702]: ts=2026-03-08T23:03:49.586Z caller=node_exporter.go:117 level=info collector=netclass 2026-03-08T23:03:50.058 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 08 23:03:49 vm11 bash[50702]: ts=2026-03-08T23:03:49.586Z caller=node_exporter.go:117 level=info collector=netdev 2026-03-08T23:03:50.058 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 08 23:03:49 vm11 bash[50702]: ts=2026-03-08T23:03:49.586Z caller=node_exporter.go:117 level=info 
collector=netstat 2026-03-08T23:03:50.058 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 08 23:03:49 vm11 bash[50702]: ts=2026-03-08T23:03:49.586Z caller=node_exporter.go:117 level=info collector=nfs 2026-03-08T23:03:50.058 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 08 23:03:49 vm11 bash[50702]: ts=2026-03-08T23:03:49.586Z caller=node_exporter.go:117 level=info collector=nfsd 2026-03-08T23:03:50.058 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 08 23:03:49 vm11 bash[50702]: ts=2026-03-08T23:03:49.586Z caller=node_exporter.go:117 level=info collector=nvme 2026-03-08T23:03:50.058 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 08 23:03:49 vm11 bash[50702]: ts=2026-03-08T23:03:49.586Z caller=node_exporter.go:117 level=info collector=os 2026-03-08T23:03:50.058 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 08 23:03:49 vm11 bash[50702]: ts=2026-03-08T23:03:49.586Z caller=node_exporter.go:117 level=info collector=powersupplyclass 2026-03-08T23:03:50.058 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 08 23:03:49 vm11 bash[50702]: ts=2026-03-08T23:03:49.587Z caller=node_exporter.go:117 level=info collector=pressure 2026-03-08T23:03:50.058 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 08 23:03:49 vm11 bash[50702]: ts=2026-03-08T23:03:49.587Z caller=node_exporter.go:117 level=info collector=rapl 2026-03-08T23:03:50.058 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 08 23:03:49 vm11 bash[50702]: ts=2026-03-08T23:03:49.587Z caller=node_exporter.go:117 level=info collector=schedstat 2026-03-08T23:03:50.058 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 08 23:03:49 vm11 bash[50702]: ts=2026-03-08T23:03:49.587Z caller=node_exporter.go:117 level=info collector=selinux 2026-03-08T23:03:50.058 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 08 23:03:49 vm11 bash[50702]: ts=2026-03-08T23:03:49.587Z caller=node_exporter.go:117 level=info collector=sockstat 2026-03-08T23:03:50.058 
INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 08 23:03:49 vm11 bash[50702]: ts=2026-03-08T23:03:49.587Z caller=node_exporter.go:117 level=info collector=softnet 2026-03-08T23:03:50.058 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 08 23:03:49 vm11 bash[50702]: ts=2026-03-08T23:03:49.587Z caller=node_exporter.go:117 level=info collector=stat 2026-03-08T23:03:50.058 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 08 23:03:49 vm11 bash[50702]: ts=2026-03-08T23:03:49.587Z caller=node_exporter.go:117 level=info collector=tapestats 2026-03-08T23:03:50.058 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 08 23:03:49 vm11 bash[50702]: ts=2026-03-08T23:03:49.587Z caller=node_exporter.go:117 level=info collector=textfile 2026-03-08T23:03:50.058 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 08 23:03:49 vm11 bash[50702]: ts=2026-03-08T23:03:49.587Z caller=node_exporter.go:117 level=info collector=thermal_zone 2026-03-08T23:03:50.058 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 08 23:03:49 vm11 bash[50702]: ts=2026-03-08T23:03:49.588Z caller=node_exporter.go:117 level=info collector=time 2026-03-08T23:03:50.058 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 08 23:03:49 vm11 bash[50702]: ts=2026-03-08T23:03:49.588Z caller=node_exporter.go:117 level=info collector=udp_queues 2026-03-08T23:03:50.058 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 08 23:03:49 vm11 bash[50702]: ts=2026-03-08T23:03:49.588Z caller=node_exporter.go:117 level=info collector=uname 2026-03-08T23:03:50.058 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 08 23:03:49 vm11 bash[50702]: ts=2026-03-08T23:03:49.588Z caller=node_exporter.go:117 level=info collector=vmstat 2026-03-08T23:03:50.058 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 08 23:03:49 vm11 bash[50702]: ts=2026-03-08T23:03:49.588Z caller=node_exporter.go:117 level=info collector=xfs 2026-03-08T23:03:50.058 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 08 23:03:49 vm11 
bash[50702]: ts=2026-03-08T23:03:49.588Z caller=node_exporter.go:117 level=info collector=zfs 2026-03-08T23:03:50.058 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 08 23:03:49 vm11 bash[50702]: ts=2026-03-08T23:03:49.588Z caller=tls_config.go:274 level=info msg="Listening on" address=[::]:9100 2026-03-08T23:03:50.058 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 08 23:03:49 vm11 bash[50702]: ts=2026-03-08T23:03:49.588Z caller=tls_config.go:277 level=info msg="TLS is disabled." http2=false address=[::]:9100 2026-03-08T23:03:50.416 INFO:teuthology.orchestra.run.vm06.stdout:[client.0] 2026-03-08T23:03:50.416 INFO:teuthology.orchestra.run.vm06.stdout: key = AQBWAK5p76VaGBAAWBN6SdFY0eTervii/mdU5A== 2026-03-08T23:03:50.585 INFO:journalctl@ceph.osd.0.vm06.stdout:Mar 08 23:03:50 vm06 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:03:50.585 INFO:journalctl@ceph.osd.2.vm06.stdout:Mar 08 23:03:50 vm06 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:03:50.585 INFO:journalctl@ceph.rgw.foo.a.vm06.stdout:Mar 08 23:03:50 vm06 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:03:50.586 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:50 vm06 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:03:50.586 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:03:50 vm06 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:03:50.586 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:50 vm06 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:03:50.586 INFO:journalctl@ceph.node-exporter.a.vm06.stdout:Mar 08 23:03:50 vm06 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-08T23:03:50.586 INFO:journalctl@ceph.osd.3.vm06.stdout:Mar 08 23:03:50 vm06 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:03:50.586 INFO:journalctl@ceph.alertmanager.a.vm06.stdout:Mar 08 23:03:50 vm06 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:03:50.586 INFO:journalctl@ceph.alertmanager.a.vm06.stdout:Mar 08 23:03:50 vm06 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:03:50.586 INFO:journalctl@ceph.alertmanager.a.vm06.stdout:Mar 08 23:03:50 vm06 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-08T23:03:50.586 INFO:journalctl@ceph.osd.1.vm06.stdout:Mar 08 23:03:50 vm06 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:03:50.591 DEBUG:teuthology.orchestra.run.vm06:> set -ex 2026-03-08T23:03:50.591 DEBUG:teuthology.orchestra.run.vm06:> sudo dd of=/etc/ceph/ceph.client.0.keyring 2026-03-08T23:03:50.591 DEBUG:teuthology.orchestra.run.vm06:> sudo chmod 0644 /etc/ceph/ceph.client.0.keyring 2026-03-08T23:03:50.611 DEBUG:teuthology.orchestra.run.vm11:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e2eb96e6-1b41-11f1-83e5-75f1b5373d30 -- ceph auth get-or-create client.1 mon 'allow *' osd 'allow *' mds 'allow *' mgr 'allow *' 2026-03-08T23:03:50.886 INFO:journalctl@ceph.osd.2.vm06.stdout:Mar 08 23:03:50 vm06 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:03:50.886 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:50 vm06 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. 
Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:03:50.886 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:50 vm06 bash[20625]: cluster 2026-03-08T23:03:49.625952+0000 mgr.y (mgr.24419) 27 : cluster [DBG] pgmap v8: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 18 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-08T23:03:50.886 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:50 vm06 bash[20625]: cluster 2026-03-08T23:03:49.625952+0000 mgr.y (mgr.24419) 27 : cluster [DBG] pgmap v8: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 18 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-08T23:03:50.886 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:50 vm06 bash[20625]: audit 2026-03-08T23:03:50.407776+0000 mon.c (mon.2) 40 : audit [INF] from='client.? 192.168.123.106:0/926295159' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-08T23:03:50.886 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:50 vm06 bash[20625]: audit 2026-03-08T23:03:50.407776+0000 mon.c (mon.2) 40 : audit [INF] from='client.? 192.168.123.106:0/926295159' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-08T23:03:50.886 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:50 vm06 bash[20625]: audit 2026-03-08T23:03:50.408436+0000 mon.a (mon.0) 760 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-08T23:03:50.886 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:50 vm06 bash[20625]: audit 2026-03-08T23:03:50.408436+0000 mon.a (mon.0) 760 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-08T23:03:50.886 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:50 vm06 bash[20625]: audit 2026-03-08T23:03:50.412791+0000 mon.a (mon.0) 761 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-08T23:03:50.886 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:50 vm06 bash[20625]: audit 2026-03-08T23:03:50.412791+0000 mon.a (mon.0) 761 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-08T23:03:50.887 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:03:50 vm06 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:03:50.887 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:03:50 vm06 bash[20883]: ::ffff:192.168.123.111 - - [08/Mar/2026:23:03:50] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-08T23:03:50.887 INFO:journalctl@ceph.rgw.foo.a.vm06.stdout:Mar 08 23:03:50 vm06 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. 
Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:03:50.887 INFO:journalctl@ceph.alertmanager.a.vm06.stdout:Mar 08 23:03:50 vm06 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:03:50.887 INFO:journalctl@ceph.alertmanager.a.vm06.stdout:Mar 08 23:03:50 vm06 systemd[1]: Started Ceph alertmanager.a for e2eb96e6-1b41-11f1-83e5-75f1b5373d30. 2026-03-08T23:03:50.887 INFO:journalctl@ceph.osd.0.vm06.stdout:Mar 08 23:03:50 vm06 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:03:50.887 INFO:journalctl@ceph.osd.1.vm06.stdout:Mar 08 23:03:50 vm06 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:03:50.887 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:50 vm06 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:03:50.887 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:50 vm06 bash[27746]: cluster 2026-03-08T23:03:49.625952+0000 mgr.y (mgr.24419) 27 : cluster [DBG] pgmap v8: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 18 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-08T23:03:50.887 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:50 vm06 bash[27746]: cluster 2026-03-08T23:03:49.625952+0000 mgr.y (mgr.24419) 27 : cluster [DBG] pgmap v8: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 18 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-08T23:03:50.887 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:50 vm06 bash[27746]: audit 2026-03-08T23:03:50.407776+0000 mon.c (mon.2) 40 : audit [INF] from='client.? 192.168.123.106:0/926295159' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-08T23:03:50.887 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:50 vm06 bash[27746]: audit 2026-03-08T23:03:50.407776+0000 mon.c (mon.2) 40 : audit [INF] from='client.? 192.168.123.106:0/926295159' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-08T23:03:50.887 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:50 vm06 bash[27746]: audit 2026-03-08T23:03:50.408436+0000 mon.a (mon.0) 760 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-08T23:03:50.887 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:50 vm06 bash[27746]: audit 2026-03-08T23:03:50.408436+0000 mon.a (mon.0) 760 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-08T23:03:50.887 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:50 vm06 bash[27746]: audit 2026-03-08T23:03:50.412791+0000 mon.a (mon.0) 761 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-08T23:03:50.887 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:50 vm06 bash[27746]: audit 2026-03-08T23:03:50.412791+0000 mon.a (mon.0) 761 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-08T23:03:50.887 INFO:journalctl@ceph.node-exporter.a.vm06.stdout:Mar 08 23:03:50 vm06 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:03:50.887 INFO:journalctl@ceph.osd.3.vm06.stdout:Mar 08 23:03:50 vm06 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. 
This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:03:50.951 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:03:50 vm06 bash[20883]: [08/Mar/2026:23:03:50] ENGINE Bus STOPPING 2026-03-08T23:03:51.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:50 vm11 bash[23232]: cluster 2026-03-08T23:03:49.625952+0000 mgr.y (mgr.24419) 27 : cluster [DBG] pgmap v8: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 18 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-08T23:03:51.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:50 vm11 bash[23232]: cluster 2026-03-08T23:03:49.625952+0000 mgr.y (mgr.24419) 27 : cluster [DBG] pgmap v8: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 18 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-08T23:03:51.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:50 vm11 bash[23232]: audit 2026-03-08T23:03:50.407776+0000 mon.c (mon.2) 40 : audit [INF] from='client.? 192.168.123.106:0/926295159' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-08T23:03:51.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:50 vm11 bash[23232]: audit 2026-03-08T23:03:50.407776+0000 mon.c (mon.2) 40 : audit [INF] from='client.? 192.168.123.106:0/926295159' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-08T23:03:51.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:50 vm11 bash[23232]: audit 2026-03-08T23:03:50.408436+0000 mon.a (mon.0) 760 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-08T23:03:51.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:50 vm11 bash[23232]: audit 2026-03-08T23:03:50.408436+0000 mon.a (mon.0) 760 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-08T23:03:51.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:50 vm11 bash[23232]: audit 2026-03-08T23:03:50.412791+0000 mon.a (mon.0) 761 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-08T23:03:51.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:50 vm11 bash[23232]: audit 2026-03-08T23:03:50.412791+0000 mon.a (mon.0) 761 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.0", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-08T23:03:51.268 INFO:journalctl@ceph.alertmanager.a.vm06.stdout:Mar 08 23:03:50 vm06 bash[55553]: ts=2026-03-08T23:03:50.952Z caller=main.go:240 level=info msg="Starting Alertmanager" version="(version=0.25.0, branch=HEAD, revision=258fab7cdd551f2cf251ed0348f0ad7289aee789)" 2026-03-08T23:03:51.268 INFO:journalctl@ceph.alertmanager.a.vm06.stdout:Mar 08 23:03:50 vm06 bash[55553]: ts=2026-03-08T23:03:50.952Z caller=main.go:241 level=info build_context="(go=go1.19.4, user=root@abe866dd5717, date=20221222-14:51:36)" 2026-03-08T23:03:51.268 INFO:journalctl@ceph.alertmanager.a.vm06.stdout:Mar 08 23:03:50 vm06 bash[55553]: ts=2026-03-08T23:03:50.953Z caller=cluster.go:185 level=info component=cluster msg="setting advertise address explicitly" addr=192.168.123.106 port=9094 2026-03-08T23:03:51.268 INFO:journalctl@ceph.alertmanager.a.vm06.stdout:Mar 08 23:03:50 vm06 bash[55553]: ts=2026-03-08T23:03:50.954Z caller=cluster.go:681 level=info component=cluster msg="Waiting for gossip to settle..." 
interval=2s 2026-03-08T23:03:51.268 INFO:journalctl@ceph.alertmanager.a.vm06.stdout:Mar 08 23:03:50 vm06 bash[55553]: ts=2026-03-08T23:03:50.983Z caller=coordinator.go:113 level=info component=configuration msg="Loading configuration file" file=/etc/alertmanager/alertmanager.yml 2026-03-08T23:03:51.268 INFO:journalctl@ceph.alertmanager.a.vm06.stdout:Mar 08 23:03:50 vm06 bash[55553]: ts=2026-03-08T23:03:50.983Z caller=coordinator.go:126 level=info component=configuration msg="Completed loading of configuration file" file=/etc/alertmanager/alertmanager.yml 2026-03-08T23:03:51.268 INFO:journalctl@ceph.alertmanager.a.vm06.stdout:Mar 08 23:03:50 vm06 bash[55553]: ts=2026-03-08T23:03:50.985Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9093 2026-03-08T23:03:51.268 INFO:journalctl@ceph.alertmanager.a.vm06.stdout:Mar 08 23:03:50 vm06 bash[55553]: ts=2026-03-08T23:03:50.985Z caller=tls_config.go:235 level=info msg="TLS is disabled." http2=false address=[::]:9093 2026-03-08T23:03:51.529 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:03:51 vm06 bash[20883]: [08/Mar/2026:23:03:51] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down 2026-03-08T23:03:51.529 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:03:51 vm06 bash[20883]: [08/Mar/2026:23:03:51] ENGINE Bus STOPPED 2026-03-08T23:03:51.529 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:03:51 vm06 bash[20883]: [08/Mar/2026:23:03:51] ENGINE Bus STARTING 2026-03-08T23:03:51.529 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:03:51 vm06 bash[20883]: [08/Mar/2026:23:03:51] ENGINE Serving on http://:::9283 2026-03-08T23:03:51.529 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:03:51 vm06 bash[20883]: [08/Mar/2026:23:03:51] ENGINE Bus STARTED 2026-03-08T23:03:51.557 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:03:51 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. 
This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:03:52.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:51 vm06 bash[20625]: audit 2026-03-08T23:03:50.805812+0000 mon.a (mon.0) 762 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:52.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:51 vm06 bash[20625]: audit 2026-03-08T23:03:50.805812+0000 mon.a (mon.0) 762 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:52.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:51 vm06 bash[20625]: audit 2026-03-08T23:03:50.814137+0000 mon.a (mon.0) 763 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:52.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:51 vm06 bash[20625]: audit 2026-03-08T23:03:50.814137+0000 mon.a (mon.0) 763 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:52.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:51 vm06 bash[20625]: audit 2026-03-08T23:03:50.822889+0000 mon.a (mon.0) 764 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:52.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:51 vm06 bash[20625]: audit 2026-03-08T23:03:50.822889+0000 mon.a (mon.0) 764 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:52.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:51 vm06 bash[20625]: audit 2026-03-08T23:03:50.828395+0000 mon.a (mon.0) 765 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:52.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:51 vm06 bash[20625]: audit 2026-03-08T23:03:50.828395+0000 mon.a (mon.0) 765 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:52.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:51 vm06 bash[20625]: cephadm 2026-03-08T23:03:50.836489+0000 mgr.y 
(mgr.24419) 28 : cephadm [INF] Regenerating cephadm self-signed grafana TLS certificates 2026-03-08T23:03:52.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:51 vm06 bash[20625]: cephadm 2026-03-08T23:03:50.836489+0000 mgr.y (mgr.24419) 28 : cephadm [INF] Regenerating cephadm self-signed grafana TLS certificates 2026-03-08T23:03:52.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:51 vm06 bash[20625]: audit 2026-03-08T23:03:50.865054+0000 mon.a (mon.0) 766 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:52.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:51 vm06 bash[20625]: audit 2026-03-08T23:03:50.865054+0000 mon.a (mon.0) 766 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:52.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:51 vm06 bash[20625]: audit 2026-03-08T23:03:50.872196+0000 mon.a (mon.0) 767 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:52.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:51 vm06 bash[20625]: audit 2026-03-08T23:03:50.872196+0000 mon.a (mon.0) 767 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:52.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:51 vm06 bash[20625]: audit 2026-03-08T23:03:50.875865+0000 mon.c (mon.2) 41 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-08T23:03:52.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:51 vm06 bash[20625]: audit 2026-03-08T23:03:50.875865+0000 mon.c (mon.2) 41 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-08T23:03:52.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:51 vm06 bash[20625]: audit 2026-03-08T23:03:50.876258+0000 mgr.y (mgr.24419) 29 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-08T23:03:52.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:51 vm06 bash[20625]: audit 2026-03-08T23:03:50.876258+0000 mgr.y (mgr.24419) 29 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-08T23:03:52.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:51 vm06 bash[20625]: audit 2026-03-08T23:03:50.881588+0000 mon.a (mon.0) 768 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:52.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:51 vm06 bash[20625]: audit 2026-03-08T23:03:50.881588+0000 mon.a (mon.0) 768 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:52.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:51 vm06 bash[20625]: cephadm 2026-03-08T23:03:50.890947+0000 mgr.y (mgr.24419) 30 : cephadm [INF] Deploying daemon grafana.a on vm11 2026-03-08T23:03:52.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:51 vm06 bash[20625]: cephadm 2026-03-08T23:03:50.890947+0000 mgr.y (mgr.24419) 30 : cephadm [INF] Deploying daemon grafana.a on vm11 2026-03-08T23:03:52.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:51 vm06 bash[27746]: audit 2026-03-08T23:03:50.805812+0000 mon.a (mon.0) 762 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:52.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:51 vm06 bash[27746]: audit 2026-03-08T23:03:50.805812+0000 mon.a (mon.0) 762 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:52.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:51 vm06 bash[27746]: audit 2026-03-08T23:03:50.814137+0000 mon.a (mon.0) 763 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:52.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:51 vm06 bash[27746]: audit 2026-03-08T23:03:50.814137+0000 mon.a (mon.0) 763 : audit [INF] from='mgr.24419 ' entity='mgr.y' 
2026-03-08T23:03:52.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:51 vm06 bash[27746]: audit 2026-03-08T23:03:50.822889+0000 mon.a (mon.0) 764 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:52.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:51 vm06 bash[27746]: audit 2026-03-08T23:03:50.822889+0000 mon.a (mon.0) 764 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:52.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:51 vm06 bash[27746]: audit 2026-03-08T23:03:50.828395+0000 mon.a (mon.0) 765 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:52.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:51 vm06 bash[27746]: audit 2026-03-08T23:03:50.828395+0000 mon.a (mon.0) 765 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:52.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:51 vm06 bash[27746]: cephadm 2026-03-08T23:03:50.836489+0000 mgr.y (mgr.24419) 28 : cephadm [INF] Regenerating cephadm self-signed grafana TLS certificates 2026-03-08T23:03:52.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:51 vm06 bash[27746]: cephadm 2026-03-08T23:03:50.836489+0000 mgr.y (mgr.24419) 28 : cephadm [INF] Regenerating cephadm self-signed grafana TLS certificates 2026-03-08T23:03:52.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:51 vm06 bash[27746]: audit 2026-03-08T23:03:50.865054+0000 mon.a (mon.0) 766 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:52.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:51 vm06 bash[27746]: audit 2026-03-08T23:03:50.865054+0000 mon.a (mon.0) 766 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:52.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:51 vm06 bash[27746]: audit 2026-03-08T23:03:50.872196+0000 mon.a (mon.0) 767 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:52.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:51 vm06 bash[27746]: audit 2026-03-08T23:03:50.872196+0000 
mon.a (mon.0) 767 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:52.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:51 vm06 bash[27746]: audit 2026-03-08T23:03:50.875865+0000 mon.c (mon.2) 41 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-08T23:03:52.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:51 vm06 bash[27746]: audit 2026-03-08T23:03:50.875865+0000 mon.c (mon.2) 41 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-08T23:03:52.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:51 vm06 bash[27746]: audit 2026-03-08T23:03:50.876258+0000 mgr.y (mgr.24419) 29 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-08T23:03:52.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:51 vm06 bash[27746]: audit 2026-03-08T23:03:50.876258+0000 mgr.y (mgr.24419) 29 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-08T23:03:52.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:51 vm06 bash[27746]: audit 2026-03-08T23:03:50.881588+0000 mon.a (mon.0) 768 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:52.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:51 vm06 bash[27746]: audit 2026-03-08T23:03:50.881588+0000 mon.a (mon.0) 768 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:52.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:51 vm06 bash[27746]: cephadm 2026-03-08T23:03:50.890947+0000 mgr.y (mgr.24419) 30 : cephadm [INF] Deploying daemon grafana.a on vm11 2026-03-08T23:03:52.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:51 vm06 bash[27746]: cephadm 2026-03-08T23:03:50.890947+0000 mgr.y (mgr.24419) 30 : cephadm [INF] Deploying daemon grafana.a on vm11 2026-03-08T23:03:52.307 INFO:journalctl@ceph.iscsi.iscsi.a.vm11.stdout:Mar 08 23:03:52 vm11 bash[48986]: debug there is no tcmu-runner data available 2026-03-08T23:03:52.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:51 vm11 bash[23232]: audit 2026-03-08T23:03:50.805812+0000 mon.a (mon.0) 762 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:52.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:51 vm11 bash[23232]: audit 2026-03-08T23:03:50.805812+0000 mon.a (mon.0) 762 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:52.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:51 vm11 bash[23232]: audit 2026-03-08T23:03:50.814137+0000 mon.a (mon.0) 763 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:52.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:51 vm11 bash[23232]: audit 2026-03-08T23:03:50.814137+0000 mon.a (mon.0) 763 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:52.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:51 vm11 bash[23232]: audit 2026-03-08T23:03:50.822889+0000 mon.a 
(mon.0) 764 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:52.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:51 vm11 bash[23232]: audit 2026-03-08T23:03:50.822889+0000 mon.a (mon.0) 764 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:52.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:51 vm11 bash[23232]: audit 2026-03-08T23:03:50.828395+0000 mon.a (mon.0) 765 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:52.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:51 vm11 bash[23232]: audit 2026-03-08T23:03:50.828395+0000 mon.a (mon.0) 765 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:52.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:51 vm11 bash[23232]: cephadm 2026-03-08T23:03:50.836489+0000 mgr.y (mgr.24419) 28 : cephadm [INF] Regenerating cephadm self-signed grafana TLS certificates 2026-03-08T23:03:52.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:51 vm11 bash[23232]: cephadm 2026-03-08T23:03:50.836489+0000 mgr.y (mgr.24419) 28 : cephadm [INF] Regenerating cephadm self-signed grafana TLS certificates 2026-03-08T23:03:52.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:51 vm11 bash[23232]: audit 2026-03-08T23:03:50.865054+0000 mon.a (mon.0) 766 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:52.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:51 vm11 bash[23232]: audit 2026-03-08T23:03:50.865054+0000 mon.a (mon.0) 766 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:52.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:51 vm11 bash[23232]: audit 2026-03-08T23:03:50.872196+0000 mon.a (mon.0) 767 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:52.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:51 vm11 bash[23232]: audit 2026-03-08T23:03:50.872196+0000 mon.a (mon.0) 767 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:52.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 
23:03:51 vm11 bash[23232]: audit 2026-03-08T23:03:50.875865+0000 mon.c (mon.2) 41 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-08T23:03:52.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:51 vm11 bash[23232]: audit 2026-03-08T23:03:50.875865+0000 mon.c (mon.2) 41 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-08T23:03:52.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:51 vm11 bash[23232]: audit 2026-03-08T23:03:50.876258+0000 mgr.y (mgr.24419) 29 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-08T23:03:52.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:51 vm11 bash[23232]: audit 2026-03-08T23:03:50.876258+0000 mgr.y (mgr.24419) 29 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-08T23:03:52.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:51 vm11 bash[23232]: audit 2026-03-08T23:03:50.881588+0000 mon.a (mon.0) 768 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:52.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:51 vm11 bash[23232]: audit 2026-03-08T23:03:50.881588+0000 mon.a (mon.0) 768 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:52.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:51 vm11 bash[23232]: cephadm 2026-03-08T23:03:50.890947+0000 mgr.y (mgr.24419) 30 : cephadm [INF] Deploying daemon grafana.a on vm11 2026-03-08T23:03:52.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:51 vm11 bash[23232]: cephadm 2026-03-08T23:03:50.890947+0000 mgr.y (mgr.24419) 30 : cephadm [INF] Deploying daemon grafana.a on vm11 2026-03-08T23:03:53.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:52 vm06 bash[20625]: cluster 2026-03-08T23:03:51.626481+0000 mgr.y (mgr.24419) 31 : cluster [DBG] pgmap v9: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-08T23:03:53.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:52 vm06 bash[20625]: cluster 2026-03-08T23:03:51.626481+0000 mgr.y (mgr.24419) 31 : cluster [DBG] pgmap v9: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-08T23:03:53.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:52 vm06 bash[20625]: audit 2026-03-08T23:03:52.042587+0000 mgr.y (mgr.24419) 32 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:03:53.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:52 vm06 bash[20625]: audit 2026-03-08T23:03:52.042587+0000 mgr.y (mgr.24419) 32 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' 
cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:03:53.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:52 vm06 bash[20625]: audit 2026-03-08T23:03:52.711154+0000 mon.a (mon.0) 769 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:53.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:52 vm06 bash[20625]: audit 2026-03-08T23:03:52.711154+0000 mon.a (mon.0) 769 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:53.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:52 vm06 bash[20625]: audit 2026-03-08T23:03:52.724199+0000 mon.c (mon.2) 42 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:03:53.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:52 vm06 bash[20625]: audit 2026-03-08T23:03:52.724199+0000 mon.c (mon.2) 42 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:03:53.280 INFO:journalctl@ceph.alertmanager.a.vm06.stdout:Mar 08 23:03:52 vm06 bash[55553]: ts=2026-03-08T23:03:52.954Z caller=cluster.go:706 level=info component=cluster msg="gossip not settled" polls=0 before=0 now=1 elapsed=2.000205295s 2026-03-08T23:03:53.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:52 vm06 bash[27746]: cluster 2026-03-08T23:03:51.626481+0000 mgr.y (mgr.24419) 31 : cluster [DBG] pgmap v9: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-08T23:03:53.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:52 vm06 bash[27746]: cluster 2026-03-08T23:03:51.626481+0000 mgr.y (mgr.24419) 31 : cluster [DBG] pgmap v9: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-08T23:03:53.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:52 vm06 bash[27746]: audit 
2026-03-08T23:03:52.042587+0000 mgr.y (mgr.24419) 32 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:03:53.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:52 vm06 bash[27746]: audit 2026-03-08T23:03:52.042587+0000 mgr.y (mgr.24419) 32 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:03:53.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:52 vm06 bash[27746]: audit 2026-03-08T23:03:52.711154+0000 mon.a (mon.0) 769 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:53.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:52 vm06 bash[27746]: audit 2026-03-08T23:03:52.711154+0000 mon.a (mon.0) 769 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:53.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:52 vm06 bash[27746]: audit 2026-03-08T23:03:52.724199+0000 mon.c (mon.2) 42 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:03:53.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:52 vm06 bash[27746]: audit 2026-03-08T23:03:52.724199+0000 mon.c (mon.2) 42 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:03:53.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:52 vm11 bash[23232]: cluster 2026-03-08T23:03:51.626481+0000 mgr.y (mgr.24419) 31 : cluster [DBG] pgmap v9: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-08T23:03:53.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:52 vm11 bash[23232]: cluster 2026-03-08T23:03:51.626481+0000 mgr.y (mgr.24419) 31 : cluster [DBG] pgmap v9: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB 
avail; 16 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-08T23:03:53.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:52 vm11 bash[23232]: audit 2026-03-08T23:03:52.042587+0000 mgr.y (mgr.24419) 32 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:03:53.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:52 vm11 bash[23232]: audit 2026-03-08T23:03:52.042587+0000 mgr.y (mgr.24419) 32 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:03:53.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:52 vm11 bash[23232]: audit 2026-03-08T23:03:52.711154+0000 mon.a (mon.0) 769 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:53.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:52 vm11 bash[23232]: audit 2026-03-08T23:03:52.711154+0000 mon.a (mon.0) 769 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:03:53.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:52 vm11 bash[23232]: audit 2026-03-08T23:03:52.724199+0000 mon.c (mon.2) 42 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:03:53.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:52 vm11 bash[23232]: audit 2026-03-08T23:03:52.724199+0000 mon.c (mon.2) 42 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:03:55.244 INFO:teuthology.orchestra.run.vm11.stderr:Inferring config /var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/mon.b/config 2026-03-08T23:03:55.263 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:54 vm11 bash[23232]: cluster 2026-03-08T23:03:53.626824+0000 mgr.y (mgr.24419) 33 : cluster [DBG] pgmap v10: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB 
avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-08T23:03:55.263 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:54 vm11 bash[23232]: cluster 2026-03-08T23:03:53.626824+0000 mgr.y (mgr.24419) 33 : cluster [DBG] pgmap v10: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-08T23:03:55.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:54 vm06 bash[20625]: cluster 2026-03-08T23:03:53.626824+0000 mgr.y (mgr.24419) 33 : cluster [DBG] pgmap v10: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-08T23:03:55.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:54 vm06 bash[20625]: cluster 2026-03-08T23:03:53.626824+0000 mgr.y (mgr.24419) 33 : cluster [DBG] pgmap v10: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-08T23:03:55.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:54 vm06 bash[27746]: cluster 2026-03-08T23:03:53.626824+0000 mgr.y (mgr.24419) 33 : cluster [DBG] pgmap v10: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-08T23:03:55.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:54 vm06 bash[27746]: cluster 2026-03-08T23:03:53.626824+0000 mgr.y (mgr.24419) 33 : cluster [DBG] pgmap v10: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-08T23:03:55.785 INFO:teuthology.orchestra.run.vm11.stdout:[client.1] 2026-03-08T23:03:55.785 INFO:teuthology.orchestra.run.vm11.stdout: key = AQBbAK5p+OARLRAAR4lEvJVaj8V7i/QMjzxJ/Q== 2026-03-08T23:03:55.890 DEBUG:teuthology.orchestra.run.vm11:> set -ex 2026-03-08T23:03:55.890 DEBUG:teuthology.orchestra.run.vm11:> sudo dd of=/etc/ceph/ceph.client.1.keyring 2026-03-08T23:03:55.890 DEBUG:teuthology.orchestra.run.vm11:> sudo chmod 0644 /etc/ceph/ceph.client.1.keyring 
2026-03-08T23:03:55.917 INFO:tasks.ceph:Waiting until ceph daemons up and pgs clean... 2026-03-08T23:03:55.917 INFO:tasks.cephadm.ceph_manager.ceph:waiting for mgr available 2026-03-08T23:03:55.917 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid e2eb96e6-1b41-11f1-83e5-75f1b5373d30 -- ceph mgr dump --format=json 2026-03-08T23:03:56.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:55 vm11 bash[23232]: audit 2026-03-08T23:03:55.754141+0000 mon.b (mon.1) 34 : audit [INF] from='client.? 192.168.123.111:0/1467160876' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-08T23:03:56.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:55 vm11 bash[23232]: audit 2026-03-08T23:03:55.754141+0000 mon.b (mon.1) 34 : audit [INF] from='client.? 192.168.123.111:0/1467160876' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-08T23:03:56.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:55 vm11 bash[23232]: audit 2026-03-08T23:03:55.755959+0000 mon.a (mon.0) 770 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-08T23:03:56.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:55 vm11 bash[23232]: audit 2026-03-08T23:03:55.755959+0000 mon.a (mon.0) 770 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-08T23:03:56.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:55 vm11 bash[23232]: audit 2026-03-08T23:03:55.783026+0000 mon.a (mon.0) 771 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-08T23:03:56.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:55 vm11 bash[23232]: audit 2026-03-08T23:03:55.783026+0000 mon.a (mon.0) 771 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-08T23:03:56.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:55 vm06 bash[20625]: audit 2026-03-08T23:03:55.754141+0000 mon.b (mon.1) 34 : audit [INF] from='client.? 192.168.123.111:0/1467160876' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-08T23:03:56.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:55 vm06 bash[20625]: audit 2026-03-08T23:03:55.754141+0000 mon.b (mon.1) 34 : audit [INF] from='client.? 192.168.123.111:0/1467160876' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-08T23:03:56.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:55 vm06 bash[20625]: audit 2026-03-08T23:03:55.755959+0000 mon.a (mon.0) 770 : audit [INF] from='client.? 
' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-08T23:03:56.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:55 vm06 bash[20625]: audit 2026-03-08T23:03:55.755959+0000 mon.a (mon.0) 770 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-08T23:03:56.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:55 vm06 bash[20625]: audit 2026-03-08T23:03:55.783026+0000 mon.a (mon.0) 771 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-08T23:03:56.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:55 vm06 bash[20625]: audit 2026-03-08T23:03:55.783026+0000 mon.a (mon.0) 771 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-08T23:03:56.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:55 vm06 bash[27746]: audit 2026-03-08T23:03:55.754141+0000 mon.b (mon.1) 34 : audit [INF] from='client.? 192.168.123.111:0/1467160876' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-08T23:03:56.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:55 vm06 bash[27746]: audit 2026-03-08T23:03:55.754141+0000 mon.b (mon.1) 34 : audit [INF] from='client.? 
192.168.123.111:0/1467160876' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-08T23:03:56.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:55 vm06 bash[27746]: audit 2026-03-08T23:03:55.755959+0000 mon.a (mon.0) 770 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-08T23:03:56.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:55 vm06 bash[27746]: audit 2026-03-08T23:03:55.755959+0000 mon.a (mon.0) 770 : audit [INF] from='client.? ' entity='client.admin' cmd=[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]: dispatch 2026-03-08T23:03:56.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:55 vm06 bash[27746]: audit 2026-03-08T23:03:55.783026+0000 mon.a (mon.0) 771 : audit [INF] from='client.? ' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-08T23:03:56.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:55 vm06 bash[27746]: audit 2026-03-08T23:03:55.783026+0000 mon.a (mon.0) 771 : audit [INF] from='client.? 
' entity='client.admin' cmd='[{"prefix": "auth get-or-create", "entity": "client.1", "caps": ["mon", "allow *", "osd", "allow *", "mds", "allow *", "mgr", "allow *"]}]': finished 2026-03-08T23:03:57.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:56 vm06 bash[20625]: cluster 2026-03-08T23:03:55.627334+0000 mgr.y (mgr.24419) 34 : cluster [DBG] pgmap v11: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-08T23:03:57.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:56 vm06 bash[20625]: cluster 2026-03-08T23:03:55.627334+0000 mgr.y (mgr.24419) 34 : cluster [DBG] pgmap v11: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-08T23:03:57.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:56 vm06 bash[27746]: cluster 2026-03-08T23:03:55.627334+0000 mgr.y (mgr.24419) 34 : cluster [DBG] pgmap v11: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-08T23:03:57.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:56 vm06 bash[27746]: cluster 2026-03-08T23:03:55.627334+0000 mgr.y (mgr.24419) 34 : cluster [DBG] pgmap v11: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-08T23:03:57.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:56 vm11 bash[23232]: cluster 2026-03-08T23:03:55.627334+0000 mgr.y (mgr.24419) 34 : cluster [DBG] pgmap v11: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-08T23:03:57.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:56 vm11 bash[23232]: cluster 2026-03-08T23:03:55.627334+0000 mgr.y (mgr.24419) 34 : cluster [DBG] pgmap v11: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 16 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-08T23:03:59.279 
INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:58 vm06 bash[20625]: cluster 2026-03-08T23:03:57.627649+0000 mgr.y (mgr.24419) 35 : cluster [DBG] pgmap v12: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:03:59.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:03:58 vm06 bash[20625]: cluster 2026-03-08T23:03:57.627649+0000 mgr.y (mgr.24419) 35 : cluster [DBG] pgmap v12: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:03:59.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:58 vm06 bash[27746]: cluster 2026-03-08T23:03:57.627649+0000 mgr.y (mgr.24419) 35 : cluster [DBG] pgmap v12: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:03:59.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:03:58 vm06 bash[27746]: cluster 2026-03-08T23:03:57.627649+0000 mgr.y (mgr.24419) 35 : cluster [DBG] pgmap v12: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:03:59.451 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:58 vm11 bash[23232]: cluster 2026-03-08T23:03:57.627649+0000 mgr.y (mgr.24419) 35 : cluster [DBG] pgmap v12: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:03:59.451 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:58 vm11 bash[23232]: cluster 2026-03-08T23:03:57.627649+0000 mgr.y (mgr.24419) 35 : cluster [DBG] pgmap v12: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:04:00.177 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:03:59 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:04:00.177 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:00 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:04:00.177 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:03:59 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:04:00.177 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:04:00 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:04:00.177 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 08 23:03:59 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-08T23:04:00.177 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 08 23:04:00 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:04:00.177 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 08 23:03:59 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:04:00.177 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 08 23:04:00 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:04:00.177 INFO:journalctl@ceph.iscsi.iscsi.a.vm11.stdout:Mar 08 23:03:59 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-08T23:04:00.177 INFO:journalctl@ceph.iscsi.iscsi.a.vm11.stdout:Mar 08 23:04:00 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:04:00.177 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 08 23:03:59 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:04:00.177 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 08 23:04:00 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:04:00.177 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:03:59 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-08T23:04:00.177 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:03:59 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:04:00.177 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:03:59 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:04:00.177 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:04:00.177 INFO:journalctl@ceph.osd.4.vm11.stdout:Mar 08 23:03:59 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-08T23:04:00.177 INFO:journalctl@ceph.osd.4.vm11.stdout:Mar 08 23:04:00 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:04:00.178 INFO:journalctl@ceph.osd.5.vm11.stdout:Mar 08 23:03:59 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:04:00.178 INFO:journalctl@ceph.osd.5.vm11.stdout:Mar 08 23:04:00 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:04:00.178 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 08 23:03:59 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-08T23:04:00.178 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 08 23:04:00 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:04:00.428 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 systemd[1]: Started Ceph grafana.a for e2eb96e6-1b41-11f1-83e5-75f1b5373d30.
2026-03-08T23:04:00.428 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=settings t=2026-03-08T23:04:00.427003658Z level=info msg="Starting Grafana" version=10.4.0 commit=03f502a94d17f7dc4e6c34acdf8428aedd986e4c branch=HEAD compiled=2026-03-08T23:04:00Z
2026-03-08T23:04:00.429 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=settings t=2026-03-08T23:04:00.42731037Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
2026-03-08T23:04:00.429 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=settings t=2026-03-08T23:04:00.42731544Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
2026-03-08T23:04:00.429 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=settings t=2026-03-08T23:04:00.427317954Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana"
2026-03-08T23:04:00.429 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=settings t=2026-03-08T23:04:00.427319918Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana"
2026-03-08T23:04:00.429 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=settings t=2026-03-08T23:04:00.427323464Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins"
2026-03-08T23:04:00.429 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=settings t=2026-03-08T23:04:00.427325298Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning"
2026-03-08T23:04:00.429 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=settings t=2026-03-08T23:04:00.427327171Z level=info msg="Config overridden from command line" arg="default.log.mode=console"
2026-03-08T23:04:00.429 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=settings t=2026-03-08T23:04:00.427329386Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana"
2026-03-08T23:04:00.429 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=settings t=2026-03-08T23:04:00.42733139Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana"
2026-03-08T23:04:00.429 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=settings t=2026-03-08T23:04:00.427333073Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
2026-03-08T23:04:00.429 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=settings t=2026-03-08T23:04:00.427334866Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
2026-03-08T23:04:00.429 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=settings t=2026-03-08T23:04:00.427337782Z level=info msg=Target target=[all]
2026-03-08T23:04:00.429 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=settings t=2026-03-08T23:04:00.427342811Z level=info msg="Path Home" path=/usr/share/grafana
2026-03-08T23:04:00.429 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=settings t=2026-03-08T23:04:00.427346508Z level=info msg="Path Data" path=/var/lib/grafana
2026-03-08T23:04:00.429 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=settings t=2026-03-08T23:04:00.427350145Z level=info msg="Path Logs" path=/var/log/grafana
2026-03-08T23:04:00.429 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=settings t=2026-03-08T23:04:00.427351828Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins
2026-03-08T23:04:00.429 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=settings t=2026-03-08T23:04:00.427353561Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning
2026-03-08T23:04:00.429 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=settings t=2026-03-08T23:04:00.427356817Z level=info msg="App mode production"
2026-03-08T23:04:00.429 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=sqlstore t=2026-03-08T23:04:00.427795677Z level=info msg="Connecting to DB" dbtype=sqlite3
2026-03-08T23:04:00.429 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=sqlstore t=2026-03-08T23:04:00.427807509Z level=warn msg="SQLite database file has broader permissions than it should" path=/var/lib/grafana/grafana.db mode=-rw-r--r-- expected=-rw-r-----
2026-03-08T23:04:00.561 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/mon.c/config
2026-03-08T23:04:00.679 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.42938245Z level=info msg="Starting DB migrations"
2026-03-08T23:04:00.680 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.430681036Z level=info msg="Executing migration" id="create migration_log table"
2026-03-08T23:04:00.680 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.44166907Z level=info msg="Migration successfully executed" id="create migration_log table" duration=10.985268ms
2026-03-08T23:04:00.680 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.44360718Z level=info msg="Executing migration" id="create user table"
2026-03-08T23:04:00.680 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.444111762Z level=info msg="Migration successfully executed" id="create user table" duration=504.972µs
2026-03-08T23:04:00.680 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.445886657Z level=info msg="Executing migration" id="add unique index user.login"
2026-03-08T23:04:00.680 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.446355733Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=469.396µs
2026-03-08T23:04:00.680 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.44768221Z level=info msg="Executing migration" id="add unique index user.email"
2026-03-08T23:04:00.680 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.448092447Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=409.997µs
2026-03-08T23:04:00.680 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.449378188Z level=info msg="Executing migration" id="drop index UQE_user_login - v1"
2026-03-08T23:04:00.680 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.449785238Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=406.981µs
2026-03-08T23:04:00.680 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.451484893Z level=info msg="Executing migration" id="drop index UQE_user_email - v1"
2026-03-08T23:04:00.680 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.451960021Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=475.258µs
2026-03-08T23:04:00.680 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.452984064Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1"
2026-03-08T23:04:00.680 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.453996354Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=1.01205ms
2026-03-08T23:04:00.680 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.457552887Z level=info msg="Executing migration" id="create user table v2"
2026-03-08T23:04:00.680 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.458102814Z level=info msg="Migration successfully executed" id="create user table v2" duration=550.016µs
2026-03-08T23:04:00.680 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.45941754Z level=info msg="Executing migration" id="create index UQE_user_login - v2"
2026-03-08T23:04:00.680 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.45984613Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=428.611µs
2026-03-08T23:04:00.680 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.460857299Z level=info msg="Executing migration" id="create index UQE_user_email - v2"
2026-03-08T23:04:00.680 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.461290038Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=430.375µs
2026-03-08T23:04:00.680 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.462612127Z level=info msg="Executing migration" id="copy data_source v1 to v2"
2026-03-08T23:04:00.680 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.462903942Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=292.717µs
2026-03-08T23:04:00.680 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.463843797Z level=info msg="Executing migration" id="Drop old table user_v1"
2026-03-08T23:04:00.680 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.464200694Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=356.776µs
2026-03-08T23:04:00.680 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.465589207Z level=info msg="Executing migration" id="Add column help_flags1 to user table"
2026-03-08T23:04:00.680 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.466127774Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=536.892µs
2026-03-08T23:04:00.680 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.467099068Z level=info msg="Executing migration" id="Update user table charset"
2026-03-08T23:04:00.680 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.467111561Z level=info msg="Migration successfully executed" id="Update user table charset" duration=13.045µs
2026-03-08T23:04:00.680 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.46810716Z level=info msg="Executing migration" id="Add last_seen_at column to user"
2026-03-08T23:04:00.680 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.468613436Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=506.025µs
2026-03-08T23:04:00.680 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.46965964Z level=info msg="Executing migration" id="Add missing user data"
2026-03-08T23:04:00.680 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.469864803Z level=info msg="Migration successfully executed" id="Add missing user data" duration=208.109µs
2026-03-08T23:04:00.680 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.471129044Z level=info msg="Executing migration" id="Add is_disabled column to user"
2026-03-08T23:04:00.680 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.471665706Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=536.451µs
2026-03-08T23:04:00.680 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.472574765Z level=info msg="Executing migration" id="Add index user.login/user.email"
2026-03-08T23:04:00.680 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.473001392Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=426.596µs
2026-03-08T23:04:00.680 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.474113138Z level=info msg="Executing migration" id="Add is_service_account column to user"
2026-03-08T23:04:00.680 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.474642537Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=529.299µs
2026-03-08T23:04:00.680 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.476156354Z level=info msg="Executing migration" id="Update is_service_account column to nullable"
2026-03-08T23:04:00.680 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.478839817Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=2.682811ms
2026-03-08T23:04:00.680 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.480085033Z level=info msg="Executing migration" id="Add uid column to user"
2026-03-08T23:04:00.680 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.480695583Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=610.58µs
2026-03-08T23:04:00.680 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.481979381Z level=info msg="Executing migration" id="Update uid column values for users"
2026-03-08T23:04:00.680 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.482178362Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=198.962µs
2026-03-08T23:04:00.680 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.483533543Z level=info msg="Executing migration" id="Add unique index user_uid"
2026-03-08T23:04:00.680 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.484144074Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=616.071µs
2026-03-08T23:04:00.680 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.485864447Z level=info msg="Executing migration" id="create temp user table v1-7"
2026-03-08T23:04:00.680 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.486371985Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=506.094µs
2026-03-08T23:04:00.680 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.487774946Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7"
2026-03-08T23:04:00.680 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.488208396Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=433.401µs
2026-03-08T23:04:00.680 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.489498966Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7"
2026-03-08T23:04:00.680 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.489922507Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=423.311µs
2026-03-08T23:04:00.680 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.491588779Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7"
2026-03-08T23:04:00.681 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.492031617Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=442.988µs
2026-03-08T23:04:00.681 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.493305676Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7"
2026-03-08T23:04:00.681 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.493727104Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=421.537µs
2026-03-08T23:04:00.681 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.494987427Z level=info msg="Executing migration" id="Update temp_user table charset"
2026-03-08T23:04:00.681 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.495001524Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=14.427µs
2026-03-08T23:04:00.681 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.496586684Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1"
2026-03-08T23:04:00.681 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.497031956Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=445.181µs
2026-03-08T23:04:00.681 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.498085353Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1"
2026-03-08T23:04:00.681 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.498503895Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=418.442µs
2026-03-08T23:04:00.681 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.499604742Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1"
2026-03-08T23:04:00.681 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.500025497Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=419.312µs
2026-03-08T23:04:00.681 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.501163843Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1"
2026-03-08T23:04:00.681 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.501583848Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=419.724µs
2026-03-08T23:04:00.681 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.502641354Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1"
2026-03-08T23:04:00.681 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.503775611Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=1.134769ms
2026-03-08T23:04:00.681 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.504702463Z level=info msg="Executing migration" id="create temp_user v2"
2026-03-08T23:04:00.681 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.505192939Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=490.545µs
2026-03-08T23:04:00.681 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.506594126Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2"
2026-03-08T23:04:00.681 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.507076567Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=482.491µs
2026-03-08T23:04:00.681 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.50806825Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2"
2026-03-08T23:04:00.681 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.508548105Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=480.066µs
2026-03-08T23:04:00.681 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.509586374Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2"
2026-03-08T23:04:00.681 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.510079235Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=491.418µs
2026-03-08T23:04:00.681 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.511037976Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2"
2026-03-08T23:04:00.681 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.511496473Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=458.486µs
2026-03-08T23:04:00.681 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.513144781Z level=info msg="Executing migration" id="copy temp_user v1 to v2"
2026-03-08T23:04:00.681 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.513434362Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=289.31µs
2026-03-08T23:04:00.681 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.514538023Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty"
2026-03-08T23:04:00.681 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.514905621Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=367.757µs
2026-03-08T23:04:00.681 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.515866605Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire"
2026-03-08T23:04:00.681 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.516145305Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=277.118µs
2026-03-08T23:04:00.681 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.517652992Z level=info msg="Executing migration" id="create star table"
2026-03-08T23:04:00.681 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.51805387Z level=info msg="Migration successfully executed" id="create star table" duration=400.658µs
2026-03-08T23:04:00.681 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.519028391Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id"
2026-03-08T23:04:00.681 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.519469354Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=439.701µs
2026-03-08T23:04:00.681 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.520658285Z level=info msg="Executing migration" id="create org table v1"
2026-03-08T23:04:00.681 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.521088598Z level=info msg="Migration successfully executed" id="create org table v1" duration=430.332µs
2026-03-08T23:04:00.681 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.522209552Z level=info msg="Executing migration" id="create index UQE_org_name - v1"
2026-03-08T23:04:00.681 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.522645065Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=434.532µs
2026-03-08T23:04:00.681 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.524154645Z level=info msg="Executing migration" id="create org_user table v1"
2026-03-08T23:04:00.681 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.524545796Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=391.12µs
2026-03-08T23:04:00.681 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.525639889Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1"
2026-03-08T23:04:00.681 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.526089699Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=449.871µs
2026-03-08T23:04:00.681 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.527237243Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1"
2026-03-08T23:04:00.681 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.527668859Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=431.475µs
2026-03-08T23:04:00.681 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.528826041Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1"
2026-03-08T23:04:00.681 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.529264901Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=437.687µs
2026-03-08T23:04:00.681 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.5307961Z level=info msg="Executing migration" id="Update org table charset"
2026-03-08T23:04:00.681 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.530808003Z level=info msg="Migration successfully executed" id="Update org table charset" duration=12.474µs
2026-03-08T23:04:00.681 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.531882029Z level=info msg="Executing migration" id="Update org_user table charset"
2026-03-08T23:04:00.681 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.5318938Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=12.194µs
2026-03-08T23:04:00.681 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.532928243Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers"
2026-03-08T23:04:00.681 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.533168672Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=240.208µs
2026-03-08T23:04:00.681 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.534391777Z level=info msg="Executing migration" id="create dashboard table"
2026-03-08T23:04:00.681 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.534924932Z level=info msg="Migration successfully executed" id="create dashboard table" duration=532.654µs
2026-03-08T23:04:00.681 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.536313065Z level=info msg="Executing migration" id="add index dashboard.account_id"
2026-03-08T23:04:00.681 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.536789615Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=476.671µs
2026-03-08T23:04:00.681 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.537951194Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug"
2026-03-08T23:04:00.682 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.538419629Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=468.375µs
2026-03-08T23:04:00.682 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.539579125Z level=info msg="Executing migration" id="create dashboard_tag table"
2026-03-08T23:04:00.682 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.539973972Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=394.898µs
2026-03-08T23:04:00.682 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.541413972Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term"
2026-03-08T23:04:00.682 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.54187303Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=459.028µs
2026-03-08T23:04:00.682 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.543068402Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1"
2026-03-08T23:04:00.682 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.543506922Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=439.541µs
2026-03-08T23:04:00.682 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.544457557Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1"
2026-03-08T23:04:00.682 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.546332569Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=1.87373ms
2026-03-08T23:04:00.682 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.547919103Z level=info msg="Executing migration" id="create dashboard v2"
2026-03-08T23:04:00.682 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.548254549Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=334.044µs
2026-03-08T23:04:00.682 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.549136817Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2"
2026-03-08T23:04:00.682 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.549661266Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=524.349µs
2026-03-08T23:04:00.682 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.550902755Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2"
2026-03-08T23:04:00.682 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.551366983Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=464.237µs
2026-03-08T23:04:00.682 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.552873617Z level=info msg="Executing migration" id="copy dashboard v1 to v2"
2026-03-08T23:04:00.682 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.553182343Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=308.625µs
2026-03-08T23:04:00.682 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.5543318Z level=info msg="Executing migration" id="drop table dashboard_v1"
2026-03-08T23:04:00.682 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.55499064Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=658.52µs
2026-03-08T23:04:00.682 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.556111053Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1"
2026-03-08T23:04:00.682 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.556199709Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=88.516µs
2026-03-08T23:04:00.682 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.557829903Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2"
2026-03-08T23:04:00.682 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.558746375Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=913.215µs
2026-03-08T23:04:00.682 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.559892466Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2"
2026-03-08T23:04:00.682 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.560747734Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=854.858µs
2026-03-08T23:04:00.682 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.562067088Z level=info msg="Executing migration" id="Add column gnetId in dashboard"
2026-03-08T23:04:00.682 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.563694978Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=1.613584ms
2026-03-08T23:04:00.682 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.564891813Z level=info msg="Executing migration" id="Add index for gnetId in dashboard"
2026-03-08T23:04:00.682 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.565450217Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=557.071µs
2026-03-08T23:04:00.682 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.567087294Z level=info msg="Executing migration" id="Add column plugin_id in dashboard"
2026-03-08T23:04:00.682
INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.567836244Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=748.739µs 2026-03-08T23:04:00.682 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.568869303Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard" 2026-03-08T23:04:00.682 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.569469695Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=601.372µs 2026-03-08T23:04:00.682 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.571856142Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag" 2026-03-08T23:04:00.682 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.572379148Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=523.347µs 2026-03-08T23:04:00.682 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.573609728Z level=info msg="Executing migration" id="Update dashboard table charset" 2026-03-08T23:04:00.682 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.573640695Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=31.359µs 2026-03-08T23:04:00.682 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.574740158Z level=info msg="Executing migration" id="Update dashboard_tag table charset" 2026-03-08T23:04:00.682 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 
08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.574753143Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=13.005µs 2026-03-08T23:04:00.682 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.575761696Z level=info msg="Executing migration" id="Add column folder_id in dashboard" 2026-03-08T23:04:00.683 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.576553746Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=792.25µs 2026-03-08T23:04:00.683 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.577820792Z level=info msg="Executing migration" id="Add column isFolder in dashboard" 2026-03-08T23:04:00.683 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.578554693Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=734.202µs 2026-03-08T23:04:00.683 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.579529023Z level=info msg="Executing migration" id="Add column has_acl in dashboard" 2026-03-08T23:04:00.683 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.580257573Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=728.441µs 2026-03-08T23:04:00.683 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.581233617Z level=info msg="Executing migration" id="Add column uid in dashboard" 2026-03-08T23:04:00.683 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.581962448Z 
level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=728.97µs 2026-03-08T23:04:00.683 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.584660457Z level=info msg="Executing migration" id="Update uid column values in dashboard" 2026-03-08T23:04:00.683 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.584858558Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=199.463µs 2026-03-08T23:04:00.683 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.601250444Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid" 2026-03-08T23:04:00.683 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.601728848Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=479.043µs 2026-03-08T23:04:00.683 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.607411542Z level=info msg="Executing migration" id="Remove unique index org_id_slug" 2026-03-08T23:04:00.683 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.607755244Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=343.922µs 2026-03-08T23:04:00.683 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.608907565Z level=info msg="Executing migration" id="Update dashboard title length" 2026-03-08T23:04:00.683 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.608919839Z level=info msg="Migration successfully executed" id="Update 
dashboard title length" duration=12.704µs 2026-03-08T23:04:00.683 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.610431753Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id" 2026-03-08T23:04:00.683 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.610805371Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=369.47µs 2026-03-08T23:04:00.683 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.612316583Z level=info msg="Executing migration" id="create dashboard_provisioning" 2026-03-08T23:04:00.683 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.612619218Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=302.846µs 2026-03-08T23:04:00.683 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.614050442Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" 2026-03-08T23:04:00.683 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.615706856Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=1.656183ms 2026-03-08T23:04:00.683 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.616799386Z level=info msg="Executing migration" id="create dashboard_provisioning v2" 2026-03-08T23:04:00.683 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.617119474Z level=info 
msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=320.308µs 2026-03-08T23:04:00.683 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.618719944Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2" 2026-03-08T23:04:00.683 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.619063676Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=344.123µs 2026-03-08T23:04:00.683 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.620283013Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" 2026-03-08T23:04:00.683 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.620608211Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=324.957µs 2026-03-08T23:04:00.683 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.621933155Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2" 2026-03-08T23:04:00.683 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.622083897Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=150.842µs 2026-03-08T23:04:00.683 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.623122236Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty" 2026-03-08T23:04:00.683 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: 
logger=migrator t=2026-03-08T23:04:00.623508067Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=385.549µs 2026-03-08T23:04:00.683 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.624742553Z level=info msg="Executing migration" id="Add check_sum column" 2026-03-08T23:04:00.683 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.625571241Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=828.387µs 2026-03-08T23:04:00.683 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.62667428Z level=info msg="Executing migration" id="Add index for dashboard_title" 2026-03-08T23:04:00.683 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.627189443Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=515.353µs 2026-03-08T23:04:00.683 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.628215039Z level=info msg="Executing migration" id="delete tags for deleted dashboards" 2026-03-08T23:04:00.683 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.628414801Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=200.354µs 2026-03-08T23:04:00.683 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.629836908Z level=info msg="Executing migration" id="delete stars for deleted dashboards" 2026-03-08T23:04:00.683 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.630043865Z level=info msg="Migration successfully 
executed" id="delete stars for deleted dashboards" duration=206.947µs 2026-03-08T23:04:00.683 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.631098665Z level=info msg="Executing migration" id="Add index for dashboard_is_folder" 2026-03-08T23:04:00.683 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.631569885Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=471.089µs 2026-03-08T23:04:00.683 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.632704794Z level=info msg="Executing migration" id="Add isPublic for dashboard" 2026-03-08T23:04:00.683 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.633571243Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=866.268µs 2026-03-08T23:04:00.683 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.635077205Z level=info msg="Executing migration" id="create data_source table" 2026-03-08T23:04:00.683 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.635589112Z level=info msg="Migration successfully executed" id="create data_source table" duration=511.665µs 2026-03-08T23:04:00.683 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.636898166Z level=info msg="Executing migration" id="add index data_source.account_id" 2026-03-08T23:04:00.683 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.637418258Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=520.742µs 2026-03-08T23:04:00.683 
INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.638673462Z level=info msg="Executing migration" id="add unique index data_source.account_id_name" 2026-03-08T23:04:00.683 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.639198313Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=523.868µs 2026-03-08T23:04:00.683 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.640703534Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1" 2026-03-08T23:04:00.683 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.641170477Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=467.213µs 2026-03-08T23:04:00.683 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.642321065Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1" 2026-03-08T23:04:00.683 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.642829555Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=507.728µs 2026-03-08T23:04:00.683 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.643963333Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1" 2026-03-08T23:04:00.683 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.645856389Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" 
duration=1.891873ms 2026-03-08T23:04:00.683 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.647433104Z level=info msg="Executing migration" id="create data_source table v2" 2026-03-08T23:04:00.683 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.647936944Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=503.58µs 2026-03-08T23:04:00.683 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.64896752Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2" 2026-03-08T23:04:00.683 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.649499302Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=531.853µs 2026-03-08T23:04:00.683 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.650742284Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2" 2026-03-08T23:04:00.683 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.651220088Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=476.28µs 2026-03-08T23:04:00.684 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.652764361Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2" 2026-03-08T23:04:00.684 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.653156293Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=391.772µs 2026-03-08T23:04:00.684 
INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.654347037Z level=info msg="Executing migration" id="Add column with_credentials" 2026-03-08T23:04:00.684 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.655232322Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=885.054µs 2026-03-08T23:04:00.684 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.656338297Z level=info msg="Executing migration" id="Add secure json data column" 2026-03-08T23:04:00.684 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.657214874Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=876.977µs 2026-03-08T23:04:00.684 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.658852563Z level=info msg="Executing migration" id="Update data_source table charset" 2026-03-08T23:04:00.684 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.658916913Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=66.554µs 2026-03-08T23:04:00.684 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.660358316Z level=info msg="Executing migration" id="Update initial version to 1" 2026-03-08T23:04:00.684 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.660630865Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=272.408µs 2026-03-08T23:04:00.684 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator 
t=2026-03-08T23:04:00.661865521Z level=info msg="Executing migration" id="Add read_only data column" 2026-03-08T23:04:00.684 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.662828931Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=963.479µs 2026-03-08T23:04:00.684 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.664046104Z level=info msg="Executing migration" id="Migrate logging ds to loki ds" 2026-03-08T23:04:00.684 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.66426428Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=218.078µs 2026-03-08T23:04:00.684 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.665243339Z level=info msg="Executing migration" id="Update json_data with nulls" 2026-03-08T23:04:00.684 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.665459684Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=216.135µs 2026-03-08T23:04:00.684 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.666604742Z level=info msg="Executing migration" id="Add uid column" 2026-03-08T23:04:00.684 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.667521224Z level=info msg="Migration successfully executed" id="Add uid column" duration=916.553µs 2026-03-08T23:04:00.684 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.668476178Z level=info msg="Executing migration" id="Update uid value" 2026-03-08T23:04:00.684 
INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.668689015Z level=info msg="Migration successfully executed" id="Update uid value" duration=212.897µs 2026-03-08T23:04:00.684 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.670101454Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid" 2026-03-08T23:04:00.684 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.670939248Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=837.755µs 2026-03-08T23:04:00.684 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.671982006Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default" 2026-03-08T23:04:00.684 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.672416517Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=434.201µs 2026-03-08T23:04:00.684 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.673595921Z level=info msg="Executing migration" id="create api_key table" 2026-03-08T23:04:00.684 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.674060369Z level=info msg="Migration successfully executed" id="create api_key table" duration=464.959µs 2026-03-08T23:04:00.684 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.67567276Z level=info msg="Executing migration" id="add index api_key.account_id" 2026-03-08T23:04:00.684 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: 
logger=migrator t=2026-03-08T23:04:00.676136054Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=463.285µs 2026-03-08T23:04:00.684 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.677277677Z level=info msg="Executing migration" id="add index api_key.key" 2026-03-08T23:04:00.684 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.677737606Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=460.951µs 2026-03-08T23:04:00.866 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-08T23:04:00.877 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:04:00 vm06 bash[20883]: ::ffff:192.168.123.111 - - [08/Mar/2026:23:04:00] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-08T23:04:00.923 INFO:teuthology.orchestra.run.vm06.stdout:{"epoch":21,"flags":0,"active_gid":24419,"active_name":"y","active_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6800","nonce":1959071245}]},"active_addr":"192.168.123.106:6800/1959071245","active_change":"2026-03-08T23:03:37.586243+0000","active_mgr_features":4540701547738038271,"available":true,"standbys":[{"gid":24448,"name":"x","mgr_features":4540701547738038271,"available_modules":[{"name":"alerts","can_run":true,"error_string":"","module_options":{"interval":{"name":"interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"How frequently to reexamine health 
status","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"smtp_destination":{"name":"smtp_destination","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Email address to send alerts to","long_desc":"","tags":[],"see_also":[]},"smtp_from_name":{"name":"smtp_from_name","type":"str","level":"advanced","flags":1,"default_value":"Ceph","min":"","max":"","enum_allowed":[],"desc":"Email From: name","long_desc":"","tags":[],"see_also":[]},"smtp_host":{"name":"smtp_host","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_password":{"name":"smtp_password","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Password to authenticate with","long_desc":"","tags":[],"see_also":[]},"smtp_port":{"name":"smtp_port","type":"int","level":"advanced","flags":1,"default_value":"465","min":"","max":"","enum_allowed":[],"desc":"SMTP 
port","long_desc":"","tags":[],"see_also":[]},"smtp_sender":{"name":"smtp_sender","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP envelope sender","long_desc":"","tags":[],"see_also":[]},"smtp_ssl":{"name":"smtp_ssl","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Use SSL to connect to SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_user":{"name":"smtp_user","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"User to authenticate as","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"balancer","can_run":true,"error_string":"","module_options":{"active":{"name":"active","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"automatically balance PGs across cluster","long_desc":"","tags":[],"see_also":[]},"begin_time":{"name":"begin_time","type":"str","level":"advanced","flags":1,"default_value":"0000","min":"","max":"","enum_allowed":[],"desc":"beginning time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"begin_weekday":{"name":"begin_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to this day of the week or later","long_desc":"0 = Sunday, 1 = Monday, etc.","tags":[],"see_also":[]},"crush_compat_max_iterations":{"name":"crush_compat_max_iterations","type":"uint","level":"advanced","flags":1,"default_value":"25","min":"1","max":"250","enum_allowed":[],"desc":"maximum number of iterations to attempt 
optimization","long_desc":"","tags":[],"see_also":[]},"crush_compat_metrics":{"name":"crush_compat_metrics","type":"str","level":"advanced","flags":1,"default_value":"pgs,objects,bytes","min":"","max":"","enum_allowed":[],"desc":"metrics with which to calculate OSD utilization","long_desc":"Value is a list of one or more of \"pgs\", \"objects\", or \"bytes\", and indicates which metrics to use to balance utilization.","tags":[],"see_also":[]},"crush_compat_step":{"name":"crush_compat_step","type":"float","level":"advanced","flags":1,"default_value":"0.5","min":"0.001","max":"0.999","enum_allowed":[],"desc":"aggressiveness of optimization","long_desc":".99 is very aggressive, .01 is less aggressive","tags":[],"see_also":[]},"end_time":{"name":"end_time","type":"str","level":"advanced","flags":1,"default_value":"2359","min":"","max":"","enum_allowed":[],"desc":"ending time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"end_weekday":{"name":"end_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to days of the week earlier than this","long_desc":"0 = Sunday, 1 = Monday, 
etc.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_score":{"name":"min_score","type":"float","level":"advanced","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"minimum score, below which no optimization is attempted","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":1,"default_value":"upmap","min":"","max":"","enum_allowed":["crush-compat","none","read","upmap","upmap-read"],"desc":"Balancer mode","long_desc":"","tags":[],"see_also":[]},"pool_ids":{"name":"pool_ids","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"pools which the automatic balancing will be limited to","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and attempt 
optimization","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"update_pg_upmap_activity":{"name":"update_pg_upmap_activity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Updates pg_upmap activity stats to be used in `balancer status detail`","long_desc":"","tags":[],"see_also":[]},"upmap_max_deviation":{"name":"upmap_max_deviation","type":"int","level":"advanced","flags":1,"default_value":"5","min":"1","max":"","enum_allowed":[],"desc":"deviation below which no optimization is attempted","long_desc":"If the number of PGs are within this count then no optimization is attempted","tags":[],"see_also":[]},"upmap_max_optimizations":{"name":"upmap_max_optimizations","type":"uint","level":"advanced","flags":1,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"maximum upmap optimizations to make per attempt","long_desc":"","tags":[],"see_also":[]}}},{"name":"cephadm","can_run":true,"error_string":"","module_options":{"agent_down_multiplier":{"name":"agent_down_multiplier","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"","max":"","enum_allowed":[],"desc":"Multiplied by agent refresh rate to calculate how long agent must not report before being marked down","long_desc":"","tags":[],"see_also":[]},"agent_refresh_rate":{"name":"agent_refresh_rate","type":"secs","level":"advanced","flags":0,"default_value":"20","min":"","max":"","enum_allowed":[],"desc":"How often agent on each host will try to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"agent_starting_port":{"name":"agent_starting_port","type":"int","level":"advanced","flags":0,"default_value":"4721","min":"","max":"","enum_allowed":[],"desc":"First port agent will try to bind to (will also try up to next 1000 subsequent 
ports if blocked)","long_desc":"","tags":[],"see_also":[]},"allow_ptrace":{"name":"allow_ptrace","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow SYS_PTRACE capability on ceph containers","long_desc":"The SYS_PTRACE capability is needed to attach to a process with gdb or strace. Enabling this options can allow debugging daemons that encounter problems at runtime.","tags":[],"see_also":[]},"autotune_interval":{"name":"autotune_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to autotune daemon memory","long_desc":"","tags":[],"see_also":[]},"autotune_memory_target_ratio":{"name":"autotune_memory_target_ratio","type":"float","level":"advanced","flags":0,"default_value":"0.7","min":"","max":"","enum_allowed":[],"desc":"ratio of total system memory to divide amongst autotuned daemons","long_desc":"","tags":[],"see_also":[]},"cephadm_log_destination":{"name":"cephadm_log_destination","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":["file","file,syslog","syslog"],"desc":"Destination for cephadm command's persistent logging","long_desc":"","tags":[],"see_also":[]},"cgroups_split":{"name":"cgroups_split","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Pass --cgroups=split when cephadm creates containers (currently podman only)","long_desc":"","tags":[],"see_also":[]},"config_checks_enabled":{"name":"config_checks_enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable or disable the cephadm configuration analysis","long_desc":"","tags":[],"see_also":[]},"config_dashboard":{"name":"config_dashboard","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"manage configs like API endpoints in 
Dashboard.","long_desc":"","tags":[],"see_also":[]},"container_image_alertmanager":{"name":"container_image_alertmanager","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/alertmanager:v0.25.0","min":"","max":"","enum_allowed":[],"desc":"Alertmanager container image","long_desc":"","tags":[],"see_also":[]},"container_image_base":{"name":"container_image_base","type":"str","level":"advanced","flags":1,"default_value":"quay.io/ceph/ceph","min":"","max":"","enum_allowed":[],"desc":"Container image name, without the tag","long_desc":"","tags":[],"see_also":[]},"container_image_elasticsearch":{"name":"container_image_elasticsearch","type":"str","level":"advanced","flags":0,"default_value":"quay.io/omrizeneva/elasticsearch:6.8.23","min":"","max":"","enum_allowed":[],"desc":"elasticsearch container image","long_desc":"","tags":[],"see_also":[]},"container_image_grafana":{"name":"container_image_grafana","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/grafana:10.4.0","min":"","max":"","enum_allowed":[],"desc":"Grafana container image","long_desc":"","tags":[],"see_also":[]},"container_image_haproxy":{"name":"container_image_haproxy","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/haproxy:2.3","min":"","max":"","enum_allowed":[],"desc":"HAproxy container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_agent":{"name":"container_image_jaeger_agent","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-agent:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger agent container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_collector":{"name":"container_image_jaeger_collector","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-collector:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger collector container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_query":{"name":"container_image_jaeger_query","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-query:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger query container image","long_desc":"","tags":[],"see_also":[]},"container_image_keepalived":{"name":"container_image_keepalived","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/keepalived:2.2.4","min":"","max":"","enum_allowed":[],"desc":"Keepalived container image","long_desc":"","tags":[],"see_also":[]},"container_image_loki":{"name":"container_image_loki","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/loki:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Loki container image","long_desc":"","tags":[],"see_also":[]},"container_image_node_exporter":{"name":"container_image_node_exporter","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/node-exporter:v1.7.0","min":"","max":"","enum_allowed":[],"desc":"Node exporter container image","long_desc":"","tags":[],"see_also":[]},"container_image_nvmeof":{"name":"container_image_nvmeof","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/nvmeof:1.2.5","min":"","max":"","enum_allowed":[],"desc":"Nvme-of container image","long_desc":"","tags":[],"see_also":[]},"container_image_prometheus":{"name":"container_image_prometheus","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/prometheus:v2.51.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_promtail":{"name":"container_image_promtail","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/promtail:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Promtail container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_samba":{"name":"container_image_samba","type":"str","level":"advanced","flags":0,"default_value":"quay.io/samba.org/samba-server:devbuilds-centos-amd64","min":"","max":"","enum_allowed":[],"desc":"Samba/SMB container image","long_desc":"","tags":[],"see_also":[]},"container_image_snmp_gateway":{"name":"container_image_snmp_gateway","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/snmp-notifier:v1.2.1","min":"","max":"","enum_allowed":[],"desc":"SNMP Gateway container image","long_desc":"","tags":[],"see_also":[]},"container_init":{"name":"container_init","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Run podman/docker with `--init`","long_desc":"","tags":[],"see_also":[]},"daemon_cache_timeout":{"name":"daemon_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"seconds to cache service (daemon) inventory","long_desc":"","tags":[],"see_also":[]},"default_cephadm_command_timeout":{"name":"default_cephadm_command_timeout","type":"int","level":"advanced","flags":0,"default_value":"900","min":"","max":"","enum_allowed":[],"desc":"Default timeout applied to cephadm commands run directly on the host (in seconds)","long_desc":"","tags":[],"see_also":[]},"default_registry":{"name":"default_registry","type":"str","level":"advanced","flags":0,"default_value":"quay.io","min":"","max":"","enum_allowed":[],"desc":"Search-registry to which we should normalize unqualified image names. 
This is not the default registry","long_desc":"","tags":[],"see_also":[]},"device_cache_timeout":{"name":"device_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"1800","min":"","max":"","enum_allowed":[],"desc":"seconds to cache device inventory","long_desc":"","tags":[],"see_also":[]},"device_enhanced_scan":{"name":"device_enhanced_scan","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use libstoragemgmt during device scans","long_desc":"","tags":[],"see_also":[]},"facts_cache_timeout":{"name":"facts_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"seconds to cache host facts data","long_desc":"","tags":[],"see_also":[]},"grafana_dashboards_path":{"name":"grafana_dashboards_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/grafana/dashboards/ceph-dashboard/","min":"","max":"","enum_allowed":[],"desc":"location of dashboards to include in grafana deployments","long_desc":"","tags":[],"see_also":[]},"host_check_interval":{"name":"host_check_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to perform a host check","long_desc":"","tags":[],"see_also":[]},"hw_monitoring":{"name":"hw_monitoring","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Deploy hw monitoring daemon on every host.","long_desc":"","tags":[],"see_also":[]},"inventory_list_all":{"name":"inventory_list_all","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Whether ceph-volume inventory should report more devices (mostly mappers (LVs / mpaths), 
partitions...)","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_refresh_metadata":{"name":"log_refresh_metadata","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Log all refresh metadata. Includes daemon, device, and host info collected regularly. Only has effect if logging at debug level","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"log to the \"cephadm\" cluster log channel","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf":{"name":"manage_etc_ceph_ceph_conf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Manage and own /etc/ceph/ceph.conf on the hosts.","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf_hosts":{"name":"manage_etc_ceph_ceph_conf_hosts","type":"str","level":"advanced","flags":0,"default_value":"*","min":"","max":"","enum_allowed":[],"desc":"PlacementSpec describing on which hosts to manage 
/etc/ceph/ceph.conf","long_desc":"","tags":[],"see_also":[]},"max_count_per_host":{"name":"max_count_per_host","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of daemons per service per host","long_desc":"","tags":[],"see_also":[]},"max_osd_draining_count":{"name":"max_osd_draining_count","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of osds that will be drained simultaneously when osds are removed","long_desc":"","tags":[],"see_also":[]},"migration_current":{"name":"migration_current","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"internal - do not modify","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":0,"default_value":"root","min":"","max":"","enum_allowed":["cephadm-package","root"],"desc":"mode for remote execution of cephadm","long_desc":"","tags":[],"see_also":[]},"oob_default_addr":{"name":"oob_default_addr","type":"str","level":"advanced","flags":0,"default_value":"169.254.1.1","min":"","max":"","enum_allowed":[],"desc":"Default address for RedFish API (oob management).","long_desc":"","tags":[],"see_also":[]},"prometheus_alerts_path":{"name":"prometheus_alerts_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/prometheus/ceph/ceph_default_alerts.yml","min":"","max":"","enum_allowed":[],"desc":"location of alerts to include in prometheus deployments","long_desc":"","tags":[],"see_also":[]},"registry_insecure":{"name":"registry_insecure","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Registry is to be considered insecure (no TLS available). 
Only for development purposes.","long_desc":"","tags":[],"see_also":[]},"registry_password":{"name":"registry_password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository password. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"registry_url":{"name":"registry_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Registry url for login purposes. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"registry_username":{"name":"registry_username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository username. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"secure_monitoring_stack":{"name":"secure_monitoring_stack","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable TLS security for all the monitoring stack daemons","long_desc":"","tags":[],"see_also":[]},"service_discovery_port":{"name":"service_discovery_port","type":"int","level":"advanced","flags":0,"default_value":"8765","min":"","max":"","enum_allowed":[],"desc":"cephadm service discovery port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssh_config_file":{"name":"ssh_config_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"customized SSH config file to connect to managed hosts","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_count_max":{"name":"ssh_keepalive_count_max","type":"int","level":"advanced","flags":0,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"How many times ssh connections can fail 
liveness checks before the host is marked offline","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_interval":{"name":"ssh_keepalive_interval","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"How often ssh connections are checked for liveness","long_desc":"","tags":[],"see_also":[]},"use_agent":{"name":"use_agent","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use cephadm agent on each host to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"use_repo_digest":{"name":"use_repo_digest","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Automatically convert image tags to image digest. Make sure all daemons use the same image","long_desc":"","tags":[],"see_also":[]},"warn_on_failed_host_check":{"name":"warn_on_failed_host_check","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if the host check fails","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_daemons":{"name":"warn_on_stray_daemons","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected that are not managed by cephadm","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_hosts":{"name":"warn_on_stray_hosts","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected on a host that is not managed by 
cephadm","long_desc":"","tags":[],"see_also":[]}}},{"name":"crash","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"retain_interval":{"name":"retain_interval","type":"secs","level":"advanced","flags":1,"default_value":"31536000","min":"","max":"","enum_allowed":[],"desc":"how long to retain crashes before pruning them","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_recent_interval":{"name":"warn_recent_interval","type":"secs","level":"advanced","flags":1,"default_value":"1209600","min":"","max":"","enum_allowed":[],"desc":"time interval in which to warn about recent 
crashes","long_desc":"","tags":[],"see_also":[]}}},{"name":"dashboard","can_run":true,"error_string":"","module_options":{"ACCOUNT_LOCKOUT_ATTEMPTS":{"name":"ACCOUNT_LOCKOUT_ATTEMPTS","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_HOST":{"name":"ALERTMANAGER_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_SSL_VERIFY":{"name":"ALERTMANAGER_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_ENABLED":{"name":"AUDIT_API_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_LOG_PAYLOAD":{"name":"AUDIT_API_LOG_PAYLOAD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ENABLE_BROWSABLE_API":{"name":"ENABLE_BROWSABLE_API","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_CEPHFS":{"name":"FEATURE_TOGGLE_CEPHFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_DASHBOARD":{"name":"FEATURE_TOGGLE_DASHBOARD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_ISCSI":{"name":"FEATURE_TOGGLE_ISCSI","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"
FEATURE_TOGGLE_MIRRORING":{"name":"FEATURE_TOGGLE_MIRRORING","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_NFS":{"name":"FEATURE_TOGGLE_NFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RBD":{"name":"FEATURE_TOGGLE_RBD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RGW":{"name":"FEATURE_TOGGLE_RGW","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE":{"name":"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_PASSWORD":{"name":"GRAFANA_API_PASSWORD","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_SSL_VERIFY":{"name":"GRAFANA_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_URL":{"name":"GRAFANA_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_USERNAME":{"name":"GRAFANA_API_USERNAME","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_FRONTEND_API_URL":{"name":"GRAFANA_FRONTEND_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","
max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_UPDATE_DASHBOARDS":{"name":"GRAFANA_UPDATE_DASHBOARDS","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISCSI_API_SSL_VERIFICATION":{"name":"ISCSI_API_SSL_VERIFICATION","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISSUE_TRACKER_API_KEY":{"name":"ISSUE_TRACKER_API_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_HOST":{"name":"PROMETHEUS_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_SSL_VERIFY":{"name":"PROMETHEUS_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_COMPLEXITY_ENABLED":{"name":"PWD_POLICY_CHECK_COMPLEXITY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED":{"name":"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_LENGTH_ENABLED":{"name":"PWD_POLICY_CHECK_LENGTH_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_OLDPWD_ENABLED":{"name":"PWD_POLICY_CHECK_OLDPWD_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","
enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_USERNAME_ENABLED":{"name":"PWD_POLICY_CHECK_USERNAME_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_ENABLED":{"name":"PWD_POLICY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_EXCLUSION_LIST":{"name":"PWD_POLICY_EXCLUSION_LIST","type":"str","level":"advanced","flags":0,"default_value":"osd,host,dashboard,pool,block,nfs,ceph,monitors,gateway,logs,crush,maps","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_COMPLEXITY":{"name":"PWD_POLICY_MIN_COMPLEXITY","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_LENGTH":{"name":"PWD_POLICY_MIN_LENGTH","type":"int","level":"advanced","flags":0,"default_value":"8","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"REST_REQUESTS_TIMEOUT":{"name":"REST_REQUESTS_TIMEOUT","type":"int","level":"advanced","flags":0,"default_value":"45","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ACCESS_KEY":{"name":"RGW_API_ACCESS_KEY","type":"str","level":"advanced","flags":0,"def
ault_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ADMIN_RESOURCE":{"name":"RGW_API_ADMIN_RESOURCE","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SECRET_KEY":{"name":"RGW_API_SECRET_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SSL_VERIFY":{"name":"RGW_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_SPAN":{"name":"USER_PWD_EXPIRATION_SPAN","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_1":{"name":"USER_PWD_EXPIRATION_WARNING_1","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_2":{"name":"USER_PWD_EXPIRATION_WARNING_2","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"cross_origin_url":{"name":"cross_origin_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"crt_file":{"name":"crt_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"debug":{"name":"debug","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable/disable debug 
options","long_desc":"","tags":[],"see_also":[]},"jwt_token_ttl":{"name":"jwt_token_ttl","type":"int","level":"advanced","flags":0,"default_value":"28800","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"motd":{"name":"motd","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"The message of the 
day","long_desc":"","tags":[],"see_also":[]},"redirect_resolve_ip_addr":{"name":"redirect_resolve_ip_addr","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":0,"default_value":"8080","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl_server_port":{"name":"ssl_server_port","type":"int","level":"advanced","flags":0,"default_value":"8443","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":0,"default_value":"redirect","min":"","max":"","enum_allowed":["error","redirect"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":0,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url_prefix":{"name":"url_prefix","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"devicehealth","can_run":true,"error_string":"","module_options":{"enable_monitoring":{"name":"enable_monitoring","type":"bool","level":"advanced","flags":1,"default_value":"True
","min":"","max":"","enum_allowed":[],"desc":"monitor device health metrics","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mark_out_threshold":{"name":"mark_out_threshold","type":"secs","level":"advanced","flags":1,"default_value":"2419200","min":"","max":"","enum_allowed":[],"desc":"automatically mark OSD if it may fail before this long","long_desc":"","tags":[],"see_also":[]},"pool_name":{"name":"pool_name","type":"str","level":"advanced","flags":1,"default_value":"device_health_metrics","min":"","max":"","enum_allowed":[],"desc":"name of pool in which to store device health metrics","long_desc":"","tags":[],"see_also":[]},"retention_period":{"name":"retention_period","type":"secs","level":"advanced","flags":1,"default_value":"15552000","min":"","max":"","enum_allowed":[],"desc":"how long to retain device health metrics","long_desc":"","tags":[],"see_also":[]},"scrape_frequency":{"name":"scrape_frequency","type":"secs","level":"advanced","flags":1,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"how frequently to scrape device health 
metrics","long_desc":"","tags":[],"see_also":[]},"self_heal":{"name":"self_heal","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"preemptively heal cluster around devices that may fail","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and check device health","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_threshold":{"name":"warn_threshold","type":"secs","level":"advanced","flags":1,"default_value":"7257600","min":"","max":"","enum_allowed":[],"desc":"raise health warning if OSD may fail before this long","long_desc":"","tags":[],"see_also":[]}}},{"name":"diskprediction_local","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predict_interval":{"name":"predict_interval","type":"str","level":"advanced","flags":
0,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predictor_model":{"name":"predictor_model","type":"str","level":"advanced","flags":0,"default_value":"prophetstor","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"str","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"influx","can_run":false,"error_string":"influxdb python module not found","module_options":{"batch_size":{"name":"batch_size","type":"int","level":"advanced","flags":0,"default_value":"5000","min":"","max":"","enum_allowed":[],"desc":"How big batches of data points should be when sending to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"database":{"name":"database","type":"str","level":"advanced","flags":0,"default_value":"ceph","min":"","max":"","enum_allowed":[],"desc":"InfluxDB database name. You will need to create this database and grant write privileges to the configured username or the username must have admin privileges to create it.","long_desc":"","tags":[],"see_also":[]},"hostname":{"name":"hostname","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server hostname","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"30","min":"5","max":"","enum_allowed":[],"desc":"Time between reports to InfluxDB. 
Default 30 seconds.","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"password":{"name":"password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"password of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"port":{"name":"port","type":"int","level":"advanced","flags":0,"default_value":"8086","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"str","level":"advanced","flags":0,"default_value":"false","min":"","max":"","enum_allowed":[],"desc":"Use https connection for InfluxDB server. 
Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]},"threads":{"name":"threads","type":"int","level":"advanced","flags":0,"default_value":"5","min":"1","max":"32","enum_allowed":[],"desc":"How many worker threads should be spawned for sending data to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"username":{"name":"username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"username of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"verify_ssl":{"name":"verify_ssl","type":"str","level":"advanced","flags":0,"default_value":"true","min":"","max":"","enum_allowed":[],"desc":"Verify https cert for InfluxDB server. Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]}}},{"name":"insights","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"iostat","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level
","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"k8sevents","can_run":true,"error_string":"","module_options":{"ceph_event_retention_days":{"name":"ceph_event_retention_days","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"Days to hold ceph event information within local cache","long_desc":"","tags":[],"see_also":[]},"config_check_secs":{"name":"config_check_secs","type":"int","level":"advanced","flags":0,"default_value":"10","min":"10","max":"","enum_allowed":[],"desc":"interval (secs) to check for cluster configuration 
changes","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"localpool","can_run":true,"error_string":"","module_options":{"failure_domain":{"name":"failure_domain","type":"str","level":"advanced","flags":1,"default_value":"host","min":"","max":"","enum_allowed":[],"desc":"failure domain for any created local pool","long_desc":"what failure domain we should separate data replicas 
across.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_size":{"name":"min_size","type":"int","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"default min_size for any created local pool","long_desc":"value to set min_size to (unchanged from Ceph's default if this option is not set)","tags":[],"see_also":[]},"num_rep":{"name":"num_rep","type":"int","level":"advanced","flags":1,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"default replica count for any created local pool","long_desc":"","tags":[],"see_also":[]},"pg_num":{"name":"pg_num","type":"int","level":"advanced","flags":1,"default_value":"128","min":"","max":"","enum_allowed":[],"desc":"default pg_num for any created local pool","long_desc":"","tags":[],"see_also":[]},"prefix":{"name":"prefix","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"name prefix for any created local 
pool","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"subtree":{"name":"subtree","type":"str","level":"advanced","flags":1,"default_value":"rack","min":"","max":"","enum_allowed":[],"desc":"CRUSH level for which to create a local pool","long_desc":"which CRUSH subtree type the module should create a pool for.","tags":[],"see_also":[]}}},{"name":"mds_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"mirroring","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bo
ol","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"nfs","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"orchestrator","can_run":true,"error_string":"","module_options":{"fail_fs":{"name":"fail_fs","typ
e":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Fail filesystem for rapid multi-rank mds upgrade","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"orchestrator":{"name":"orchestrator","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["cephadm","rook","test_orchestrator"],"desc":"Orchestrator 
backend","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_perf_query","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":""
,"enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"pg_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"threshold":{"name":"threshold","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"1.0","max":"","enum_allowed":[],"desc":"scaling threshold","long_desc":"The 
factor by which the `NEW PG_NUM` must vary from the current`PG_NUM` before being accepted. Cannot be less than 1.0","tags":[],"see_also":[]}}},{"name":"progress","can_run":true,"error_string":"","module_options":{"allow_pg_recovery_event":{"name":"allow_pg_recovery_event","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow the module to show pg recovery progress","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_completed_events":{"name":"max_completed_events","type":"int","level":"advanced","flags":1,"default_value":"50","min":"","max":"","enum_allowed":[],"desc":"number of past completed events to remember","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"how long the module is going to 
sleep","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"prometheus","can_run":true,"error_string":"","module_options":{"cache":{"name":"cache","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"exclude_perf_counters":{"name":"exclude_perf_counters","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Do not include perf-counters in the metrics output","long_desc":"Gathering perf-counters from a single Prometheus exporter can degrade ceph-mgr performance, especially in large clusters. Instead, Ceph-exporter daemons are now used by default for perf-counter gathering. This should only be disabled when no ceph-exporters are deployed.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools":{"name":"rbd_stats_pools","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","
enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools_refresh_interval":{"name":"rbd_stats_pools_refresh_interval","type":"int","level":"advanced","flags":0,"default_value":"300","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"scrape_interval":{"name":"scrape_interval","type":"float","level":"advanced","flags":0,"default_value":"15.0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"the IPv4 or IPv6 address on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":1,"default_value":"9283","min":"","max":"","enum_allowed":[],"desc":"the port on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"stale_cache_strategy":{"name":"stale_cache_strategy","type":"str","level":"advanced","flags":0,"default_value":"log","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":1,"default_value":"default","min":"","max":"","enum_allowed":["default","error"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":1,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rbd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","
max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_snap_create":{"name":"max_concurrent_snap_create","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mirror_snapshot_schedule":{"name":"mirror_snapshot_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"trash_purge_schedule":{"name":"trash_purge_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"restful","can_run":true,"error_string":"","module_options":{"enable_auth":{"name":"enable_auth","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_a
lso":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_requests":{"name":"max_requests","type":"int","level":"advanced","flags":0,"default_value":"500","min":"","max":"","enum_allowed":[],"desc":"Maximum number of requests to keep in memory. When new request comes in, the oldest request will be removed if the number of requests exceeds the max request number. 
if un-finished request is removed, error message will be logged in the ceph-mgr log.","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rgw","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"secondary_zone_period_retry_limit":{"name":"secondary_zone_period_retry_limit","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"RGW module period update retry limit for secondary 
site","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rook","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"storage_class":{"name":"storage_class","type":"str","level":"advanced","flags":0,"default_value":"local","min":"","max":"","enum_allowed":[],"desc":"storage class name for LSO-discovered 
PVs","long_desc":"","tags":[],"see_also":[]}}},{"name":"selftest","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption1":{"name":"roption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption2":{"name":"roption2","type":"str","level":"advanced","flags":0,"default_value":"xyz","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption1":{"name":"rwoption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption2":{"name":"rwoption2","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption3":{"name":"rwoption3","type":"float","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption4":{"name":"rwoption4","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[
],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption5":{"name":"rwoption5","type":"bool","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption6":{"name":"rwoption6","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption7":{"name":"rwoption7","type":"int","level":"advanced","flags":0,"default_value":"","min":"1","max":"42","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testkey":{"name":"testkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testlkey":{"name":"testlkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testnewline":{"name":"testnewline","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"snap_schedule","can_run":true,"error_string":"","module_options":{"allow_m_granularity":{"name":"allow_m_granularity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow minute scheduled snapshots","long_desc":"","tags":[],"see_also":[]},"dump_on_update":{"name":"dump_on_update","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"dump database to debug log on 
update","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"stats","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":
"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"status","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telegraf","can_run":true,"error_string":"","module_options":{"address":{"name":"address","type":"str","level":"advanced","flags":0,"default_value":"unixgram:///tmp/telegraf.sock","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"15","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tag
s":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telemetry","can_run":true,"error_string":"","module_options":{"channel_basic":{"name":"channel_basic","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share basic cluster information (size, version)","long_desc":"","tags":[],"see_also":[]},"channel_crash":{"name":"channel_crash","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share metadata about Ceph daemon crashes (version, stack straces, etc)","long_desc":"","tags":[],"see_also":[]},"channel_device":{"name":"channel_device","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share device health metrics (e.g., SMART data, minus potentially identifying info like serial numbers)","long_desc":"","tags":[],"see_also":[]},"channel_ident":{"name":"channel_ident","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share a user-provided description and/or contact email for the 
cluster","long_desc":"","tags":[],"see_also":[]},"channel_perf":{"name":"channel_perf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share various performance metrics of a cluster","long_desc":"","tags":[],"see_also":[]},"contact":{"name":"contact","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"description":{"name":"description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"device_url":{"name":"device_url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/device","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"int","level":"advanced","flags":0,"default_value":"24","min":"8","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"last_opt_revision":{"name":"last_opt_revision","type":"int","level":"advanced","flags":0,"default_value":"1","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard":{"name":"leaderboard","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard_description":{"name":"leaderboard_description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","lon
g_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"organization":{"name":"organization","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"proxy":{"name":"proxy","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url":{"name":"url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/report","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"test_orchestrator","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advan
ced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"volumes","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_clones":{"name":"max_concurrent_clones","type":"int","level":"advanced","flags":0,"default_value":"4","min":"","max":"","enum_allowed":[],"desc":"Number of asynchronous cloner threads","long_desc":"","tags":[],"see_also":[]},"periodic_async_work":{"name":"periodic_async_work","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Periodically check for async 
work","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_delay":{"name":"snapshot_clone_delay","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"Delay clone begin operation by snapshot_clone_delay seconds","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_no_wait":{"name":"snapshot_clone_no_wait","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Reject subvolume clone request when cloner threads are busy","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"zabbix","can_run":true,"error_string":"","module_options":{"discovery_interval":{"name":"discovery_interval","type":"uint","level":"advanced","flags":0,"default_value":"100","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"identifier":{"name":"identifier","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error
","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_host":{"name":"zabbix_host","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_port":{"name":"zabbix_port","type":"int","level":"advanced","flags":0,"default_value":"10051","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_sender":{"name":"zabbix_sender","type":"str","level":"advanced","flags":0,"default_value":"/usr/bin/zabbix_sender","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}}]}],"modules":["cephadm","dashboard","iostat","nfs","prometheus","restful"],"available_modules":[{"name":"alerts","can_run":true,"error_string":"","module_options":{"interval":{"name":"interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"How frequently to reexamine health 
status","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"smtp_destination":{"name":"smtp_destination","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Email address to send alerts to","long_desc":"","tags":[],"see_also":[]},"smtp_from_name":{"name":"smtp_from_name","type":"str","level":"advanced","flags":1,"default_value":"Ceph","min":"","max":"","enum_allowed":[],"desc":"Email From: name","long_desc":"","tags":[],"see_also":[]},"smtp_host":{"name":"smtp_host","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_password":{"name":"smtp_password","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Password to authenticate with","long_desc":"","tags":[],"see_also":[]},"smtp_port":{"name":"smtp_port","type":"int","level":"advanced","flags":1,"default_value":"465","min":"","max":"","enum_allowed":[],"desc":"SMTP 
port","long_desc":"","tags":[],"see_also":[]},"smtp_sender":{"name":"smtp_sender","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP envelope sender","long_desc":"","tags":[],"see_also":[]},"smtp_ssl":{"name":"smtp_ssl","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Use SSL to connect to SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_user":{"name":"smtp_user","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"User to authenticate as","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"balancer","can_run":true,"error_string":"","module_options":{"active":{"name":"active","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"automatically balance PGs across cluster","long_desc":"","tags":[],"see_also":[]},"begin_time":{"name":"begin_time","type":"str","level":"advanced","flags":1,"default_value":"0000","min":"","max":"","enum_allowed":[],"desc":"beginning time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"begin_weekday":{"name":"begin_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to this day of the week or later","long_desc":"0 = Sunday, 1 = Monday, etc.","tags":[],"see_also":[]},"crush_compat_max_iterations":{"name":"crush_compat_max_iterations","type":"uint","level":"advanced","flags":1,"default_value":"25","min":"1","max":"250","enum_allowed":[],"desc":"maximum number of iterations to attempt 
optimization","long_desc":"","tags":[],"see_also":[]},"crush_compat_metrics":{"name":"crush_compat_metrics","type":"str","level":"advanced","flags":1,"default_value":"pgs,objects,bytes","min":"","max":"","enum_allowed":[],"desc":"metrics with which to calculate OSD utilization","long_desc":"Value is a list of one or more of \"pgs\", \"objects\", or \"bytes\", and indicates which metrics to use to balance utilization.","tags":[],"see_also":[]},"crush_compat_step":{"name":"crush_compat_step","type":"float","level":"advanced","flags":1,"default_value":"0.5","min":"0.001","max":"0.999","enum_allowed":[],"desc":"aggressiveness of optimization","long_desc":".99 is very aggressive, .01 is less aggressive","tags":[],"see_also":[]},"end_time":{"name":"end_time","type":"str","level":"advanced","flags":1,"default_value":"2359","min":"","max":"","enum_allowed":[],"desc":"ending time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"end_weekday":{"name":"end_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to days of the week earlier than this","long_desc":"0 = Sunday, 1 = Monday, 
etc.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_score":{"name":"min_score","type":"float","level":"advanced","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"minimum score, below which no optimization is attempted","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":1,"default_value":"upmap","min":"","max":"","enum_allowed":["crush-compat","none","read","upmap","upmap-read"],"desc":"Balancer mode","long_desc":"","tags":[],"see_also":[]},"pool_ids":{"name":"pool_ids","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"pools which the automatic balancing will be limited to","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and attempt 
optimization","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"update_pg_upmap_activity":{"name":"update_pg_upmap_activity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Updates pg_upmap activity stats to be used in `balancer status detail`","long_desc":"","tags":[],"see_also":[]},"upmap_max_deviation":{"name":"upmap_max_deviation","type":"int","level":"advanced","flags":1,"default_value":"5","min":"1","max":"","enum_allowed":[],"desc":"deviation below which no optimization is attempted","long_desc":"If the number of PGs are within this count then no optimization is attempted","tags":[],"see_also":[]},"upmap_max_optimizations":{"name":"upmap_max_optimizations","type":"uint","level":"advanced","flags":1,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"maximum upmap optimizations to make per attempt","long_desc":"","tags":[],"see_also":[]}}},{"name":"cephadm","can_run":true,"error_string":"","module_options":{"agent_down_multiplier":{"name":"agent_down_multiplier","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"","max":"","enum_allowed":[],"desc":"Multiplied by agent refresh rate to calculate how long agent must not report before being marked down","long_desc":"","tags":[],"see_also":[]},"agent_refresh_rate":{"name":"agent_refresh_rate","type":"secs","level":"advanced","flags":0,"default_value":"20","min":"","max":"","enum_allowed":[],"desc":"How often agent on each host will try to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"agent_starting_port":{"name":"agent_starting_port","type":"int","level":"advanced","flags":0,"default_value":"4721","min":"","max":"","enum_allowed":[],"desc":"First port agent will try to bind to (will also try up to next 1000 subsequent 
ports if blocked)","long_desc":"","tags":[],"see_also":[]},"allow_ptrace":{"name":"allow_ptrace","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow SYS_PTRACE capability on ceph containers","long_desc":"The SYS_PTRACE capability is needed to attach to a process with gdb or strace. Enabling this options can allow debugging daemons that encounter problems at runtime.","tags":[],"see_also":[]},"autotune_interval":{"name":"autotune_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to autotune daemon memory","long_desc":"","tags":[],"see_also":[]},"autotune_memory_target_ratio":{"name":"autotune_memory_target_ratio","type":"float","level":"advanced","flags":0,"default_value":"0.7","min":"","max":"","enum_allowed":[],"desc":"ratio of total system memory to divide amongst autotuned daemons","long_desc":"","tags":[],"see_also":[]},"cephadm_log_destination":{"name":"cephadm_log_destination","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":["file","file,syslog","syslog"],"desc":"Destination for cephadm command's persistent logging","long_desc":"","tags":[],"see_also":[]},"cgroups_split":{"name":"cgroups_split","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Pass --cgroups=split when cephadm creates containers (currently podman only)","long_desc":"","tags":[],"see_also":[]},"config_checks_enabled":{"name":"config_checks_enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable or disable the cephadm configuration analysis","long_desc":"","tags":[],"see_also":[]},"config_dashboard":{"name":"config_dashboard","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"manage configs like API endpoints in 
Dashboard.","long_desc":"","tags":[],"see_also":[]},"container_image_alertmanager":{"name":"container_image_alertmanager","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/alertmanager:v0.25.0","min":"","max":"","enum_allowed":[],"desc":"Alertmanager container image","long_desc":"","tags":[],"see_also":[]},"container_image_base":{"name":"container_image_base","type":"str","level":"advanced","flags":1,"default_value":"quay.io/ceph/ceph","min":"","max":"","enum_allowed":[],"desc":"Container image name, without the tag","long_desc":"","tags":[],"see_also":[]},"container_image_elasticsearch":{"name":"container_image_elasticsearch","type":"str","level":"advanced","flags":0,"default_value":"quay.io/omrizeneva/elasticsearch:6.8.23","min":"","max":"","enum_allowed":[],"desc":"elasticsearch container image","long_desc":"","tags":[],"see_also":[]},"container_image_grafana":{"name":"container_image_grafana","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/grafana:10.4.0","min":"","max":"","enum_allowed":[],"desc":"Grafana container image","long_desc":"","tags":[],"see_also":[]},"container_image_haproxy":{"name":"container_image_haproxy","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/haproxy:2.3","min":"","max":"","enum_allowed":[],"desc":"HAproxy container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_agent":{"name":"container_image_jaeger_agent","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-agent:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger agent container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_collector":{"name":"container_image_jaeger_collector","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-collector:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger collector container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_query":{"name":"container_image_jaeger_query","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-query:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger query container image","long_desc":"","tags":[],"see_also":[]},"container_image_keepalived":{"name":"container_image_keepalived","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/keepalived:2.2.4","min":"","max":"","enum_allowed":[],"desc":"Keepalived container image","long_desc":"","tags":[],"see_also":[]},"container_image_loki":{"name":"container_image_loki","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/loki:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Loki container image","long_desc":"","tags":[],"see_also":[]},"container_image_node_exporter":{"name":"container_image_node_exporter","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/node-exporter:v1.7.0","min":"","max":"","enum_allowed":[],"desc":"Node exporter container image","long_desc":"","tags":[],"see_also":[]},"container_image_nvmeof":{"name":"container_image_nvmeof","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/nvmeof:1.2.5","min":"","max":"","enum_allowed":[],"desc":"Nvme-of container image","long_desc":"","tags":[],"see_also":[]},"container_image_prometheus":{"name":"container_image_prometheus","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/prometheus:v2.51.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_promtail":{"name":"container_image_promtail","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/promtail:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Promtail container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_samba":{"name":"container_image_samba","type":"str","level":"advanced","flags":0,"default_value":"quay.io/samba.org/samba-server:devbuilds-centos-amd64","min":"","max":"","enum_allowed":[],"desc":"Samba/SMB container image","long_desc":"","tags":[],"see_also":[]},"container_image_snmp_gateway":{"name":"container_image_snmp_gateway","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/snmp-notifier:v1.2.1","min":"","max":"","enum_allowed":[],"desc":"SNMP Gateway container image","long_desc":"","tags":[],"see_also":[]},"container_init":{"name":"container_init","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Run podman/docker with `--init`","long_desc":"","tags":[],"see_also":[]},"daemon_cache_timeout":{"name":"daemon_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"seconds to cache service (daemon) inventory","long_desc":"","tags":[],"see_also":[]},"default_cephadm_command_timeout":{"name":"default_cephadm_command_timeout","type":"int","level":"advanced","flags":0,"default_value":"900","min":"","max":"","enum_allowed":[],"desc":"Default timeout applied to cephadm commands run directly on the host (in seconds)","long_desc":"","tags":[],"see_also":[]},"default_registry":{"name":"default_registry","type":"str","level":"advanced","flags":0,"default_value":"quay.io","min":"","max":"","enum_allowed":[],"desc":"Search-registry to which we should normalize unqualified image names. 
This is not the default registry","long_desc":"","tags":[],"see_also":[]},"device_cache_timeout":{"name":"device_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"1800","min":"","max":"","enum_allowed":[],"desc":"seconds to cache device inventory","long_desc":"","tags":[],"see_also":[]},"device_enhanced_scan":{"name":"device_enhanced_scan","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use libstoragemgmt during device scans","long_desc":"","tags":[],"see_also":[]},"facts_cache_timeout":{"name":"facts_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"seconds to cache host facts data","long_desc":"","tags":[],"see_also":[]},"grafana_dashboards_path":{"name":"grafana_dashboards_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/grafana/dashboards/ceph-dashboard/","min":"","max":"","enum_allowed":[],"desc":"location of dashboards to include in grafana deployments","long_desc":"","tags":[],"see_also":[]},"host_check_interval":{"name":"host_check_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to perform a host check","long_desc":"","tags":[],"see_also":[]},"hw_monitoring":{"name":"hw_monitoring","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Deploy hw monitoring daemon on every host.","long_desc":"","tags":[],"see_also":[]},"inventory_list_all":{"name":"inventory_list_all","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Whether ceph-volume inventory should report more devices (mostly mappers (LVs / mpaths), 
partitions...)","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_refresh_metadata":{"name":"log_refresh_metadata","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Log all refresh metadata. Includes daemon, device, and host info collected regularly. Only has effect if logging at debug level","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"log to the \"cephadm\" cluster log channel","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf":{"name":"manage_etc_ceph_ceph_conf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Manage and own /etc/ceph/ceph.conf on the hosts.","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf_hosts":{"name":"manage_etc_ceph_ceph_conf_hosts","type":"str","level":"advanced","flags":0,"default_value":"*","min":"","max":"","enum_allowed":[],"desc":"PlacementSpec describing on which hosts to manage 
/etc/ceph/ceph.conf","long_desc":"","tags":[],"see_also":[]},"max_count_per_host":{"name":"max_count_per_host","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of daemons per service per host","long_desc":"","tags":[],"see_also":[]},"max_osd_draining_count":{"name":"max_osd_draining_count","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of osds that will be drained simultaneously when osds are removed","long_desc":"","tags":[],"see_also":[]},"migration_current":{"name":"migration_current","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"internal - do not modify","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":0,"default_value":"root","min":"","max":"","enum_allowed":["cephadm-package","root"],"desc":"mode for remote execution of cephadm","long_desc":"","tags":[],"see_also":[]},"oob_default_addr":{"name":"oob_default_addr","type":"str","level":"advanced","flags":0,"default_value":"169.254.1.1","min":"","max":"","enum_allowed":[],"desc":"Default address for RedFish API (oob management).","long_desc":"","tags":[],"see_also":[]},"prometheus_alerts_path":{"name":"prometheus_alerts_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/prometheus/ceph/ceph_default_alerts.yml","min":"","max":"","enum_allowed":[],"desc":"location of alerts to include in prometheus deployments","long_desc":"","tags":[],"see_also":[]},"registry_insecure":{"name":"registry_insecure","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Registry is to be considered insecure (no TLS available). 
Only for development purposes.","long_desc":"","tags":[],"see_also":[]},"registry_password":{"name":"registry_password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository password. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"registry_url":{"name":"registry_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Registry url for login purposes. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"registry_username":{"name":"registry_username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository username. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"secure_monitoring_stack":{"name":"secure_monitoring_stack","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable TLS security for all the monitoring stack daemons","long_desc":"","tags":[],"see_also":[]},"service_discovery_port":{"name":"service_discovery_port","type":"int","level":"advanced","flags":0,"default_value":"8765","min":"","max":"","enum_allowed":[],"desc":"cephadm service discovery port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssh_config_file":{"name":"ssh_config_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"customized SSH config file to connect to managed hosts","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_count_max":{"name":"ssh_keepalive_count_max","type":"int","level":"advanced","flags":0,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"How many times ssh connections can fail 
liveness checks before the host is marked offline","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_interval":{"name":"ssh_keepalive_interval","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"How often ssh connections are checked for liveness","long_desc":"","tags":[],"see_also":[]},"use_agent":{"name":"use_agent","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use cephadm agent on each host to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"use_repo_digest":{"name":"use_repo_digest","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Automatically convert image tags to image digest. Make sure all daemons use the same image","long_desc":"","tags":[],"see_also":[]},"warn_on_failed_host_check":{"name":"warn_on_failed_host_check","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if the host check fails","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_daemons":{"name":"warn_on_stray_daemons","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected that are not managed by cephadm","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_hosts":{"name":"warn_on_stray_hosts","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected on a host that is not managed by 
cephadm","long_desc":"","tags":[],"see_also":[]}}},{"name":"crash","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"retain_interval":{"name":"retain_interval","type":"secs","level":"advanced","flags":1,"default_value":"31536000","min":"","max":"","enum_allowed":[],"desc":"how long to retain crashes before pruning them","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_recent_interval":{"name":"warn_recent_interval","type":"secs","level":"advanced","flags":1,"default_value":"1209600","min":"","max":"","enum_allowed":[],"desc":"time interval in which to warn about recent 
crashes","long_desc":"","tags":[],"see_also":[]}}},{"name":"dashboard","can_run":true,"error_string":"","module_options":{"ACCOUNT_LOCKOUT_ATTEMPTS":{"name":"ACCOUNT_LOCKOUT_ATTEMPTS","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_HOST":{"name":"ALERTMANAGER_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_SSL_VERIFY":{"name":"ALERTMANAGER_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_ENABLED":{"name":"AUDIT_API_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_LOG_PAYLOAD":{"name":"AUDIT_API_LOG_PAYLOAD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ENABLE_BROWSABLE_API":{"name":"ENABLE_BROWSABLE_API","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_CEPHFS":{"name":"FEATURE_TOGGLE_CEPHFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_DASHBOARD":{"name":"FEATURE_TOGGLE_DASHBOARD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_ISCSI":{"name":"FEATURE_TOGGLE_ISCSI","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"
FEATURE_TOGGLE_MIRRORING":{"name":"FEATURE_TOGGLE_MIRRORING","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_NFS":{"name":"FEATURE_TOGGLE_NFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RBD":{"name":"FEATURE_TOGGLE_RBD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RGW":{"name":"FEATURE_TOGGLE_RGW","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE":{"name":"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_PASSWORD":{"name":"GRAFANA_API_PASSWORD","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_SSL_VERIFY":{"name":"GRAFANA_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_URL":{"name":"GRAFANA_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_USERNAME":{"name":"GRAFANA_API_USERNAME","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_FRONTEND_API_URL":{"name":"GRAFANA_FRONTEND_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","
max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_UPDATE_DASHBOARDS":{"name":"GRAFANA_UPDATE_DASHBOARDS","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISCSI_API_SSL_VERIFICATION":{"name":"ISCSI_API_SSL_VERIFICATION","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISSUE_TRACKER_API_KEY":{"name":"ISSUE_TRACKER_API_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_HOST":{"name":"PROMETHEUS_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_SSL_VERIFY":{"name":"PROMETHEUS_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_COMPLEXITY_ENABLED":{"name":"PWD_POLICY_CHECK_COMPLEXITY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED":{"name":"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_LENGTH_ENABLED":{"name":"PWD_POLICY_CHECK_LENGTH_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_OLDPWD_ENABLED":{"name":"PWD_POLICY_CHECK_OLDPWD_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","
enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_USERNAME_ENABLED":{"name":"PWD_POLICY_CHECK_USERNAME_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_ENABLED":{"name":"PWD_POLICY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_EXCLUSION_LIST":{"name":"PWD_POLICY_EXCLUSION_LIST","type":"str","level":"advanced","flags":0,"default_value":"osd,host,dashboard,pool,block,nfs,ceph,monitors,gateway,logs,crush,maps","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_COMPLEXITY":{"name":"PWD_POLICY_MIN_COMPLEXITY","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_LENGTH":{"name":"PWD_POLICY_MIN_LENGTH","type":"int","level":"advanced","flags":0,"default_value":"8","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"REST_REQUESTS_TIMEOUT":{"name":"REST_REQUESTS_TIMEOUT","type":"int","level":"advanced","flags":0,"default_value":"45","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ACCESS_KEY":{"name":"RGW_API_ACCESS_KEY","type":"str","level":"advanced","flags":0,"def
ault_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ADMIN_RESOURCE":{"name":"RGW_API_ADMIN_RESOURCE","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SECRET_KEY":{"name":"RGW_API_SECRET_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SSL_VERIFY":{"name":"RGW_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_SPAN":{"name":"USER_PWD_EXPIRATION_SPAN","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_1":{"name":"USER_PWD_EXPIRATION_WARNING_1","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_2":{"name":"USER_PWD_EXPIRATION_WARNING_2","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"cross_origin_url":{"name":"cross_origin_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"crt_file":{"name":"crt_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"debug":{"name":"debug","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable/disable debug 
options","long_desc":"","tags":[],"see_also":[]},"jwt_token_ttl":{"name":"jwt_token_ttl","type":"int","level":"advanced","flags":0,"default_value":"28800","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"motd":{"name":"motd","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"The message of the 
day","long_desc":"","tags":[],"see_also":[]},"redirect_resolve_ip_addr":{"name":"redirect_resolve_ip_addr","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":0,"default_value":"8080","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl_server_port":{"name":"ssl_server_port","type":"int","level":"advanced","flags":0,"default_value":"8443","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":0,"default_value":"redirect","min":"","max":"","enum_allowed":["error","redirect"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":0,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url_prefix":{"name":"url_prefix","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"devicehealth","can_run":true,"error_string":"","module_options":{"enable_monitoring":{"name":"enable_monitoring","type":"bool","level":"advanced","flags":1,"default_value":"True
","min":"","max":"","enum_allowed":[],"desc":"monitor device health metrics","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mark_out_threshold":{"name":"mark_out_threshold","type":"secs","level":"advanced","flags":1,"default_value":"2419200","min":"","max":"","enum_allowed":[],"desc":"automatically mark OSD if it may fail before this long","long_desc":"","tags":[],"see_also":[]},"pool_name":{"name":"pool_name","type":"str","level":"advanced","flags":1,"default_value":"device_health_metrics","min":"","max":"","enum_allowed":[],"desc":"name of pool in which to store device health metrics","long_desc":"","tags":[],"see_also":[]},"retention_period":{"name":"retention_period","type":"secs","level":"advanced","flags":1,"default_value":"15552000","min":"","max":"","enum_allowed":[],"desc":"how long to retain device health metrics","long_desc":"","tags":[],"see_also":[]},"scrape_frequency":{"name":"scrape_frequency","type":"secs","level":"advanced","flags":1,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"how frequently to scrape device health 
metrics","long_desc":"","tags":[],"see_also":[]},"self_heal":{"name":"self_heal","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"preemptively heal cluster around devices that may fail","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and check device health","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_threshold":{"name":"warn_threshold","type":"secs","level":"advanced","flags":1,"default_value":"7257600","min":"","max":"","enum_allowed":[],"desc":"raise health warning if OSD may fail before this long","long_desc":"","tags":[],"see_also":[]}}},{"name":"diskprediction_local","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predict_interval":{"name":"predict_interval","type":"str","level":"advanced","flags":
0,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predictor_model":{"name":"predictor_model","type":"str","level":"advanced","flags":0,"default_value":"prophetstor","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"str","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"influx","can_run":false,"error_string":"influxdb python module not found","module_options":{"batch_size":{"name":"batch_size","type":"int","level":"advanced","flags":0,"default_value":"5000","min":"","max":"","enum_allowed":[],"desc":"How big batches of data points should be when sending to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"database":{"name":"database","type":"str","level":"advanced","flags":0,"default_value":"ceph","min":"","max":"","enum_allowed":[],"desc":"InfluxDB database name. You will need to create this database and grant write privileges to the configured username or the username must have admin privileges to create it.","long_desc":"","tags":[],"see_also":[]},"hostname":{"name":"hostname","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server hostname","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"30","min":"5","max":"","enum_allowed":[],"desc":"Time between reports to InfluxDB. 
Default 30 seconds.","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"password":{"name":"password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"password of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"port":{"name":"port","type":"int","level":"advanced","flags":0,"default_value":"8086","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"str","level":"advanced","flags":0,"default_value":"false","min":"","max":"","enum_allowed":[],"desc":"Use https connection for InfluxDB server. 
Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]},"threads":{"name":"threads","type":"int","level":"advanced","flags":0,"default_value":"5","min":"1","max":"32","enum_allowed":[],"desc":"How many worker threads should be spawned for sending data to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"username":{"name":"username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"username of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"verify_ssl":{"name":"verify_ssl","type":"str","level":"advanced","flags":0,"default_value":"true","min":"","max":"","enum_allowed":[],"desc":"Verify https cert for InfluxDB server. Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]}}},{"name":"insights","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"iostat","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level
","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"k8sevents","can_run":true,"error_string":"","module_options":{"ceph_event_retention_days":{"name":"ceph_event_retention_days","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"Days to hold ceph event information within local cache","long_desc":"","tags":[],"see_also":[]},"config_check_secs":{"name":"config_check_secs","type":"int","level":"advanced","flags":0,"default_value":"10","min":"10","max":"","enum_allowed":[],"desc":"interval (secs) to check for cluster configuration 
changes","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"localpool","can_run":true,"error_string":"","module_options":{"failure_domain":{"name":"failure_domain","type":"str","level":"advanced","flags":1,"default_value":"host","min":"","max":"","enum_allowed":[],"desc":"failure domain for any created local pool","long_desc":"what failure domain we should separate data replicas 
across.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_size":{"name":"min_size","type":"int","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"default min_size for any created local pool","long_desc":"value to set min_size to (unchanged from Ceph's default if this option is not set)","tags":[],"see_also":[]},"num_rep":{"name":"num_rep","type":"int","level":"advanced","flags":1,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"default replica count for any created local pool","long_desc":"","tags":[],"see_also":[]},"pg_num":{"name":"pg_num","type":"int","level":"advanced","flags":1,"default_value":"128","min":"","max":"","enum_allowed":[],"desc":"default pg_num for any created local pool","long_desc":"","tags":[],"see_also":[]},"prefix":{"name":"prefix","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"name prefix for any created local 
pool","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"subtree":{"name":"subtree","type":"str","level":"advanced","flags":1,"default_value":"rack","min":"","max":"","enum_allowed":[],"desc":"CRUSH level for which to create a local pool","long_desc":"which CRUSH subtree type the module should create a pool for.","tags":[],"see_also":[]}}},{"name":"mds_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"mirroring","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bo
ol","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"nfs","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"orchestrator","can_run":true,"error_string":"","module_options":{"fail_fs":{"name":"fail_fs","typ
e":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Fail filesystem for rapid multi-rank mds upgrade","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"orchestrator":{"name":"orchestrator","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["cephadm","rook","test_orchestrator"],"desc":"Orchestrator 
backend","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_perf_query","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":""
,"enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"pg_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"threshold":{"name":"threshold","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"1.0","max":"","enum_allowed":[],"desc":"scaling threshold","long_desc":"The 
factor by which the `NEW PG_NUM` must vary from the current`PG_NUM` before being accepted. Cannot be less than 1.0","tags":[],"see_also":[]}}},{"name":"progress","can_run":true,"error_string":"","module_options":{"allow_pg_recovery_event":{"name":"allow_pg_recovery_event","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow the module to show pg recovery progress","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_completed_events":{"name":"max_completed_events","type":"int","level":"advanced","flags":1,"default_value":"50","min":"","max":"","enum_allowed":[],"desc":"number of past completed events to remember","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"how long the module is going to 
sleep","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"prometheus","can_run":true,"error_string":"","module_options":{"cache":{"name":"cache","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"exclude_perf_counters":{"name":"exclude_perf_counters","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Do not include perf-counters in the metrics output","long_desc":"Gathering perf-counters from a single Prometheus exporter can degrade ceph-mgr performance, especially in large clusters. Instead, Ceph-exporter daemons are now used by default for perf-counter gathering. This should only be disabled when no ceph-exporters are deployed.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools":{"name":"rbd_stats_pools","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","
enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools_refresh_interval":{"name":"rbd_stats_pools_refresh_interval","type":"int","level":"advanced","flags":0,"default_value":"300","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"scrape_interval":{"name":"scrape_interval","type":"float","level":"advanced","flags":0,"default_value":"15.0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"the IPv4 or IPv6 address on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":1,"default_value":"9283","min":"","max":"","enum_allowed":[],"desc":"the port on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"stale_cache_strategy":{"name":"stale_cache_strategy","type":"str","level":"advanced","flags":0,"default_value":"log","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":1,"default_value":"default","min":"","max":"","enum_allowed":["default","error"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":1,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rbd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","
max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_snap_create":{"name":"max_concurrent_snap_create","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mirror_snapshot_schedule":{"name":"mirror_snapshot_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"trash_purge_schedule":{"name":"trash_purge_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"restful","can_run":true,"error_string":"","module_options":{"enable_auth":{"name":"enable_auth","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_a
lso":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_requests":{"name":"max_requests","type":"int","level":"advanced","flags":0,"default_value":"500","min":"","max":"","enum_allowed":[],"desc":"Maximum number of requests to keep in memory. When new request comes in, the oldest request will be removed if the number of requests exceeds the max request number. 
if un-finished request is removed, error message will be logged in the ceph-mgr log.","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rgw","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"secondary_zone_period_retry_limit":{"name":"secondary_zone_period_retry_limit","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"RGW module period update retry limit for secondary 
site","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rook","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"storage_class":{"name":"storage_class","type":"str","level":"advanced","flags":0,"default_value":"local","min":"","max":"","enum_allowed":[],"desc":"storage class name for LSO-discovered 
PVs","long_desc":"","tags":[],"see_also":[]}}},{"name":"selftest","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption1":{"name":"roption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption2":{"name":"roption2","type":"str","level":"advanced","flags":0,"default_value":"xyz","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption1":{"name":"rwoption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption2":{"name":"rwoption2","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption3":{"name":"rwoption3","type":"float","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption4":{"name":"rwoption4","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[
],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption5":{"name":"rwoption5","type":"bool","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption6":{"name":"rwoption6","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption7":{"name":"rwoption7","type":"int","level":"advanced","flags":0,"default_value":"","min":"1","max":"42","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testkey":{"name":"testkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testlkey":{"name":"testlkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testnewline":{"name":"testnewline","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"snap_schedule","can_run":true,"error_string":"","module_options":{"allow_m_granularity":{"name":"allow_m_granularity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow minute scheduled snapshots","long_desc":"","tags":[],"see_also":[]},"dump_on_update":{"name":"dump_on_update","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"dump database to debug log on 
update","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"stats","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":
"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"status","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telegraf","can_run":true,"error_string":"","module_options":{"address":{"name":"address","type":"str","level":"advanced","flags":0,"default_value":"unixgram:///tmp/telegraf.sock","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"15","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tag
s":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telemetry","can_run":true,"error_string":"","module_options":{"channel_basic":{"name":"channel_basic","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share basic cluster information (size, version)","long_desc":"","tags":[],"see_also":[]},"channel_crash":{"name":"channel_crash","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share metadata about Ceph daemon crashes (version, stack straces, etc)","long_desc":"","tags":[],"see_also":[]},"channel_device":{"name":"channel_device","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share device health metrics (e.g., SMART data, minus potentially identifying info like serial numbers)","long_desc":"","tags":[],"see_also":[]},"channel_ident":{"name":"channel_ident","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share a user-provided description and/or contact email for the 
cluster","long_desc":"","tags":[],"see_also":[]},"channel_perf":{"name":"channel_perf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share various performance metrics of a cluster","long_desc":"","tags":[],"see_also":[]},"contact":{"name":"contact","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"description":{"name":"description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"device_url":{"name":"device_url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/device","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"int","level":"advanced","flags":0,"default_value":"24","min":"8","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"last_opt_revision":{"name":"last_opt_revision","type":"int","level":"advanced","flags":0,"default_value":"1","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard":{"name":"leaderboard","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard_description":{"name":"leaderboard_description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","lon
g_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"organization":{"name":"organization","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"proxy":{"name":"proxy","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url":{"name":"url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/report","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"test_orchestrator","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advan
ced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"volumes","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_clones":{"name":"max_concurrent_clones","type":"int","level":"advanced","flags":0,"default_value":"4","min":"","max":"","enum_allowed":[],"desc":"Number of asynchronous cloner threads","long_desc":"","tags":[],"see_also":[]},"periodic_async_work":{"name":"periodic_async_work","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Periodically check for async 
work","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_delay":{"name":"snapshot_clone_delay","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"Delay clone begin operation by snapshot_clone_delay seconds","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_no_wait":{"name":"snapshot_clone_no_wait","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Reject subvolume clone request when cloner threads are busy","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"zabbix","can_run":true,"error_string":"","module_options":{"discovery_interval":{"name":"discovery_interval","type":"uint","level":"advanced","flags":0,"default_value":"100","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"identifier":{"name":"identifier","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error
","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_host":{"name":"zabbix_host","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_port":{"name":"zabbix_port","type":"int","level":"advanced","flags":0,"default_value":"10051","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_sender":{"name":"zabbix_sender","type":"str","level":"advanced","flags":0,"default_value":"/usr/bin/zabbix_sender","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}}],"services":{"dashboard":"https://192.168.123.106:8443/","prometheus":"http://192.168.123.106:9283/"},"always_on_modules":{"octopus":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"pacific":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"quincy":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"reef":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"squid":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"]},"force_disabled_modules":{},"last_failure_osd_epoch":66,"active_clients":[{"name":"devicehealth","addrvec":[{"type":"v2","addr":"192.168.123.106:0","nonce":4147557555}]},{"name":"libcep
hsqlite","addrvec":[{"type":"v2","addr":"192.168.123.106:0","nonce":3568512739}]},{"name":"rbd_support","addrvec":[{"type":"v2","addr":"192.168.123.106:0","nonce":2640369936}]},{"name":"volumes","addrvec":[{"type":"v2","addr":"192.168.123.106:0","nonce":627252962}]}]} 2026-03-08T23:04:00.925 INFO:tasks.cephadm.ceph_manager.ceph:mgr available! 2026-03-08T23:04:00.925 INFO:tasks.cephadm.ceph_manager.ceph:waiting for all up 2026-03-08T23:04:00.925 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid e2eb96e6-1b41-11f1-83e5-75f1b5373d30 -- ceph osd dump --format=json 2026-03-08T23:04:00.931 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.679030821Z level=info msg="Executing migration" id="add index api_key.account_id_name" 2026-03-08T23:04:00.931 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.681397532Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=2.36674ms 2026-03-08T23:04:00.931 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.683091997Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1" 2026-03-08T23:04:00.931 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.683560372Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=468.455µs 2026-03-08T23:04:00.931 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.684628758Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1" 2026-03-08T23:04:00.931 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator 
t=2026-03-08T23:04:00.685080982Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=452.124µs 2026-03-08T23:04:00.931 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.68673356Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1" 2026-03-08T23:04:00.931 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.687170775Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=437.365µs 2026-03-08T23:04:00.931 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.688294404Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1" 2026-03-08T23:04:00.931 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.69044437Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=2.149916ms 2026-03-08T23:04:00.931 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.691720062Z level=info msg="Executing migration" id="create api_key table v2" 2026-03-08T23:04:00.931 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.692154304Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=433.811µs 2026-03-08T23:04:00.931 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.69380698Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2" 2026-03-08T23:04:00.931 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.694269355Z level=info 
msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=462.624µs 2026-03-08T23:04:00.931 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.695438939Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2" 2026-03-08T23:04:00.931 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.696057694Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=619.547µs 2026-03-08T23:04:00.931 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.697542277Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2" 2026-03-08T23:04:00.931 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.697999532Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=456.873µs 2026-03-08T23:04:00.932 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.699381543Z level=info msg="Executing migration" id="copy api_key v1 to v2" 2026-03-08T23:04:00.932 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.699667818Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=285.985µs 2026-03-08T23:04:00.932 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.700619054Z level=info msg="Executing migration" id="Drop old table api_key_v1" 2026-03-08T23:04:00.932 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.700993314Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" 
duration=374.179µs 2026-03-08T23:04:00.932 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.702367861Z level=info msg="Executing migration" id="Update api_key table charset" 2026-03-08T23:04:00.932 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.702393859Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=26.64µs 2026-03-08T23:04:00.932 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.703371986Z level=info msg="Executing migration" id="Add expires to api_key table" 2026-03-08T23:04:00.932 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.704392953Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=1.020555ms 2026-03-08T23:04:00.932 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.705591982Z level=info msg="Executing migration" id="Add service account foreign key" 2026-03-08T23:04:00.932 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.706492204Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=899.08µs 2026-03-08T23:04:00.932 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.707467586Z level=info msg="Executing migration" id="set service account foreign key to nil if 0" 2026-03-08T23:04:00.932 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.707659063Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=191.637µs 2026-03-08T23:04:00.932 
INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.708927703Z level=info msg="Executing migration" id="Add last_used_at to api_key table" 2026-03-08T23:04:00.932 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.709832593Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=905.521µs 2026-03-08T23:04:00.932 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.710967433Z level=info msg="Executing migration" id="Add is_revoked column to api_key table" 2026-03-08T23:04:00.932 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.711864007Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=896.034µs 2026-03-08T23:04:00.932 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.713110946Z level=info msg="Executing migration" id="create dashboard_snapshot table v4" 2026-03-08T23:04:00.932 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.713568571Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=451.314µs 2026-03-08T23:04:00.932 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.715101013Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1" 2026-03-08T23:04:00.932 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.71550627Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=405.598µs 2026-03-08T23:04:00.932 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 
23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.716480982Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2" 2026-03-08T23:04:00.932 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.716937714Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=456.561µs 2026-03-08T23:04:00.932 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.718095767Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5" 2026-03-08T23:04:00.932 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.718544315Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=448.588µs 2026-03-08T23:04:00.932 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.72008404Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5" 2026-03-08T23:04:00.932 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.720540684Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=456.734µs 2026-03-08T23:04:00.932 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.721733541Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5" 2026-03-08T23:04:00.932 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.722179605Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=446.284µs 2026-03-08T23:04:00.932 
INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.723415173Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2" 2026-03-08T23:04:00.932 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.723573509Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=158.405µs 2026-03-08T23:04:00.932 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.724956762Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset" 2026-03-08T23:04:00.932 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.724982921Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=24.777µs 2026-03-08T23:04:00.932 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.726194604Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table" 2026-03-08T23:04:00.932 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.727494712Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=1.297864ms 2026-03-08T23:04:00.932 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.728647215Z level=info msg="Executing migration" id="Add encrypted dashboard json column" 2026-03-08T23:04:00.932 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.729645109Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=997.975µs 
2026-03-08T23:04:00.932 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.731036207Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB" 2026-03-08T23:04:00.932 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.731211744Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=175.176µs 2026-03-08T23:04:00.932 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.732623652Z level=info msg="Executing migration" id="create quota table v1" 2026-03-08T23:04:00.932 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.733076107Z level=info msg="Migration successfully executed" id="create quota table v1" duration=452.446µs 2026-03-08T23:04:00.932 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.734260519Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1" 2026-03-08T23:04:00.932 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.734726259Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=464.027µs 2026-03-08T23:04:00.932 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.736125854Z level=info msg="Executing migration" id="Update quota table charset" 2026-03-08T23:04:00.932 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.736151411Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=25.998µs 2026-03-08T23:04:00.932 
INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.737595138Z level=info msg="Executing migration" id="create plugin_setting table" 2026-03-08T23:04:00.932 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.738064594Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=469.276µs 2026-03-08T23:04:00.932 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.739591407Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1" 2026-03-08T23:04:00.932 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.740074829Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=483.182µs 2026-03-08T23:04:00.933 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.741292805Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings" 2026-03-08T23:04:00.933 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.742285228Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=992.242µs 2026-03-08T23:04:00.933 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.743729245Z level=info msg="Executing migration" id="Update plugin_setting table charset" 2026-03-08T23:04:00.933 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.743754893Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=26.089µs 2026-03-08T23:04:00.933 
INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.744793212Z level=info msg="Executing migration" id="create session table" 2026-03-08T23:04:00.933 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.745342217Z level=info msg="Migration successfully executed" id="create session table" duration=548.814µs 2026-03-08T23:04:00.933 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.746589306Z level=info msg="Executing migration" id="Drop old table playlist table" 2026-03-08T23:04:00.933 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.74676755Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=178.113µs 2026-03-08T23:04:00.933 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.747792625Z level=info msg="Executing migration" id="Drop old table playlist_item table" 2026-03-08T23:04:00.933 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.747938457Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=145.581µs 2026-03-08T23:04:00.933 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.749447044Z level=info msg="Executing migration" id="create playlist table v2" 2026-03-08T23:04:00.933 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.749884212Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=436.806µs 2026-03-08T23:04:00.933 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator 
t=2026-03-08T23:04:00.75170937Z level=info msg="Executing migration" id="create playlist item table v2" 2026-03-08T23:04:00.933 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.752182494Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=472.604µs 2026-03-08T23:04:00.933 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.753375974Z level=info msg="Executing migration" id="Update playlist table charset" 2026-03-08T23:04:00.933 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.753532816Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=157.483µs 2026-03-08T23:04:00.933 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.755008483Z level=info msg="Executing migration" id="Update playlist_item table charset" 2026-03-08T23:04:00.933 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.755035774Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=27.712µs 2026-03-08T23:04:00.933 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.756021014Z level=info msg="Executing migration" id="Add playlist column created_at" 2026-03-08T23:04:00.933 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.757370925Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=1.350933ms 2026-03-08T23:04:00.933 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.758494382Z level=info msg="Executing migration" id="Add playlist 
column updated_at" 2026-03-08T23:04:00.933 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.759555404Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=1.059038ms 2026-03-08T23:04:00.933 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.761510417Z level=info msg="Executing migration" id="drop preferences table v2" 2026-03-08T23:04:00.933 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.761579677Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=69.499µs 2026-03-08T23:04:00.933 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.763248233Z level=info msg="Executing migration" id="drop preferences table v3" 2026-03-08T23:04:00.933 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.763395748Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=147.384µs 2026-03-08T23:04:00.933 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.764421584Z level=info msg="Executing migration" id="create preferences table v3" 2026-03-08T23:04:00.933 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.764914174Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=491.188µs 2026-03-08T23:04:00.933 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.766122361Z level=info msg="Executing migration" id="Update preferences table charset" 2026-03-08T23:04:00.933 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 
bash[51186]: logger=migrator t=2026-03-08T23:04:00.766148559Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=26.589µs 2026-03-08T23:04:00.933 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.767557952Z level=info msg="Executing migration" id="Add column team_id in preferences" 2026-03-08T23:04:00.933 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.76864911Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=1.091058ms 2026-03-08T23:04:00.933 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.76984325Z level=info msg="Executing migration" id="Update team_id column values in preferences" 2026-03-08T23:04:00.933 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.770027986Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=184.606µs 2026-03-08T23:04:00.933 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.771061567Z level=info msg="Executing migration" id="Add column week_start in preferences" 2026-03-08T23:04:00.933 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.772087292Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=1.025505ms 2026-03-08T23:04:00.933 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.773005206Z level=info msg="Executing migration" id="Add column preferences.json_data" 2026-03-08T23:04:00.933 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator 
t=2026-03-08T23:04:00.774403669Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=1.398442ms 2026-03-08T23:04:00.933 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.775808723Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1" 2026-03-08T23:04:00.933 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.775853146Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=40.776µs 2026-03-08T23:04:00.933 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.777155378Z level=info msg="Executing migration" id="Add preferences index org_id" 2026-03-08T23:04:00.933 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.777649651Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=494.864µs 2026-03-08T23:04:00.933 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.778798197Z level=info msg="Executing migration" id="Add preferences index user_id" 2026-03-08T23:04:00.933 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.779248508Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=450.121µs 2026-03-08T23:04:00.933 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.780681003Z level=info msg="Executing migration" id="create alert table v1" 2026-03-08T23:04:00.933 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.78123638Z level=info msg="Migration successfully 
executed" id="create alert table v1" duration=555.266µs 2026-03-08T23:04:00.933 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.782434578Z level=info msg="Executing migration" id="add index alert org_id & id " 2026-03-08T23:04:00.933 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.782934823Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=500.275µs 2026-03-08T23:04:00.933 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.784417752Z level=info msg="Executing migration" id="add index alert state" 2026-03-08T23:04:00.933 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.784867292Z level=info msg="Migration successfully executed" id="add index alert state" duration=449.55µs 2026-03-08T23:04:00.933 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.78605979Z level=info msg="Executing migration" id="add index alert dashboard_id" 2026-03-08T23:04:00.933 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.786501364Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=441.795µs 2026-03-08T23:04:00.933 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.787629451Z level=info msg="Executing migration" id="Create alert_rule_tag table v1" 2026-03-08T23:04:00.933 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.788036592Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=407.141µs 2026-03-08T23:04:00.933 
INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.78949171Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id" 2026-03-08T23:04:00.933 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.789953913Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=461.892µs 2026-03-08T23:04:00.933 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.79111937Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" 2026-03-08T23:04:00.933 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.791577907Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=458.737µs 2026-03-08T23:04:00.933 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.792485972Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" 2026-03-08T23:04:00.933 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.795232533Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=2.745088ms 2026-03-08T23:04:00.933 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.796669666Z level=info msg="Executing migration" id="Create alert_rule_tag table v2" 2026-03-08T23:04:00.934 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.797095472Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" 
duration=426.557µs 2026-03-08T23:04:00.934 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.798242414Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" 2026-03-08T23:04:00.934 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.798693958Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=451.404µs 2026-03-08T23:04:00.934 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.799875424Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2" 2026-03-08T23:04:00.934 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.800127876Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=252.061µs 2026-03-08T23:04:00.934 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.800999865Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1" 2026-03-08T23:04:00.934 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.801408598Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=408.674µs 2026-03-08T23:04:00.934 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.80293047Z level=info msg="Executing migration" id="create alert_notification table v1" 2026-03-08T23:04:00.934 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.803357608Z level=info msg="Migration successfully 
executed" id="create alert_notification table v1" duration=426.988µs 2026-03-08T23:04:00.934 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.804379197Z level=info msg="Executing migration" id="Add column is_default" 2026-03-08T23:04:00.934 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.805525578Z level=info msg="Migration successfully executed" id="Add column is_default" duration=1.147312ms 2026-03-08T23:04:00.934 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.807031671Z level=info msg="Executing migration" id="Add column frequency" 2026-03-08T23:04:00.934 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.808163284Z level=info msg="Migration successfully executed" id="Add column frequency" duration=1.131484ms 2026-03-08T23:04:00.934 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.809075818Z level=info msg="Executing migration" id="Add column send_reminder" 2026-03-08T23:04:00.934 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.810203475Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=1.127466ms 2026-03-08T23:04:00.934 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.811261681Z level=info msg="Executing migration" id="Add column disable_resolve_message" 2026-03-08T23:04:00.934 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.812402272Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=1.140351ms 2026-03-08T23:04:00.934 
INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.813571676Z level=info msg="Executing migration" id="add index alert_notification org_id & name" 2026-03-08T23:04:00.934 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.814017208Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=445.402µs 2026-03-08T23:04:00.934 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.815398819Z level=info msg="Executing migration" id="Update alert table charset" 2026-03-08T23:04:00.934 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.815425589Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=27.02µs 2026-03-08T23:04:00.934 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.816367117Z level=info msg="Executing migration" id="Update alert_notification table charset" 2026-03-08T23:04:00.934 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.816392485Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=25.878µs 2026-03-08T23:04:00.934 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.817475738Z level=info msg="Executing migration" id="create notification_journal table v1" 2026-03-08T23:04:00.934 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.817891505Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=415.396µs 2026-03-08T23:04:00.934 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 
08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.819311988Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id" 2026-03-08T23:04:00.934 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.819772578Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=460.58µs 2026-03-08T23:04:00.934 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.820891198Z level=info msg="Executing migration" id="drop alert_notification_journal" 2026-03-08T23:04:00.934 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.821346136Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=454.989µs 2026-03-08T23:04:00.934 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.822484062Z level=info msg="Executing migration" id="create alert_notification_state table v1" 2026-03-08T23:04:00.934 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.822925246Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=441.144µs 2026-03-08T23:04:00.935 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.824125328Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id" 2026-03-08T23:04:00.935 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.824588212Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=463.076µs 
2026-03-08T23:04:00.935 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.825711411Z level=info msg="Executing migration" id="Add for to alert table" 2026-03-08T23:04:00.935 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.826912594Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=1.201033ms 2026-03-08T23:04:00.935 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.82792274Z level=info msg="Executing migration" id="Add column uid in alert_notification" 2026-03-08T23:04:00.935 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.829182172Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=1.259282ms 2026-03-08T23:04:00.935 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.830613065Z level=info msg="Executing migration" id="Update uid column values in alert_notification" 2026-03-08T23:04:00.935 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.83082399Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=210.844µs 2026-03-08T23:04:00.935 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.831770939Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid" 2026-03-08T23:04:00.935 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.832234103Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=463.284µs 2026-03-08T23:04:00.935 
INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.833348886Z level=info msg="Executing migration" id="Remove unique index org_id_name" 2026-03-08T23:04:00.935 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.833798896Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=449.871µs 2026-03-08T23:04:00.935 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.835222446Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification" 2026-03-08T23:04:00.935 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.836427886Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=1.205351ms 2026-03-08T23:04:00.935 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.83755936Z level=info msg="Executing migration" id="alter alert.settings to mediumtext" 2026-03-08T23:04:00.935 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.837601569Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=41.778µs 2026-03-08T23:04:00.935 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.838729896Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id" 2026-03-08T23:04:00.935 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.839187801Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=457.876µs 
2026-03-08T23:04:00.935 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.840499631Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id" 2026-03-08T23:04:00.935 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.841004846Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=505.225µs 2026-03-08T23:04:00.935 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.842347534Z level=info msg="Executing migration" id="Drop old annotation table v4" 2026-03-08T23:04:00.935 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.842497413Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=149.639µs 2026-03-08T23:04:00.935 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.843537425Z level=info msg="Executing migration" id="create annotation table v5" 2026-03-08T23:04:00.935 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.844013145Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=474.206µs 2026-03-08T23:04:00.935 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.84549358Z level=info msg="Executing migration" id="add index annotation 0 v3" 2026-03-08T23:04:00.935 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.845948359Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=454.438µs 2026-03-08T23:04:00.935 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 
vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.847236756Z level=info msg="Executing migration" id="add index annotation 1 v3" 2026-03-08T23:04:00.935 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.847692356Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=455.57µs 2026-03-08T23:04:00.935 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.848867742Z level=info msg="Executing migration" id="add index annotation 2 v3" 2026-03-08T23:04:00.935 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.849379818Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=510.363µs 2026-03-08T23:04:00.935 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.850852849Z level=info msg="Executing migration" id="add index annotation 3 v3" 2026-03-08T23:04:00.935 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.851352293Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=499.393µs 2026-03-08T23:04:00.935 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.852564386Z level=info msg="Executing migration" id="add index annotation 4 v3" 2026-03-08T23:04:00.935 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.853092032Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=527.557µs 2026-03-08T23:04:00.935 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.854282956Z level=info msg="Executing migration" id="Update annotation table 
charset" 2026-03-08T23:04:00.935 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.854309155Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=26.4µs 2026-03-08T23:04:00.935 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.855611838Z level=info msg="Executing migration" id="Add column region_id to annotation table" 2026-03-08T23:04:00.935 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.856907368Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=1.29549ms 2026-03-08T23:04:00.935 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.858149909Z level=info msg="Executing migration" id="Drop category_id index" 2026-03-08T23:04:00.935 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.858602123Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=452.395µs 2026-03-08T23:04:00.935 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.859615576Z level=info msg="Executing migration" id="Add column tags to annotation table" 2026-03-08T23:04:00.935 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.860839453Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=1.223676ms 2026-03-08T23:04:00.935 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.86229461Z level=info msg="Executing migration" id="Create annotation_tag table v2" 2026-03-08T23:04:00.935 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 
08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.862721027Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=426.306µs 2026-03-08T23:04:00.935 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.863688323Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id" 2026-03-08T23:04:00.935 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.864152921Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=464.398µs 2026-03-08T23:04:00.935 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.865284425Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" 2026-03-08T23:04:00.935 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.865767306Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=482.711µs 2026-03-08T23:04:00.935 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.867433778Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2" 2026-03-08T23:04:00.935 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.872705545Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=5.270445ms 2026-03-08T23:04:00.935 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.874412133Z level=info msg="Executing migration" id="Create annotation_tag table v3" 
2026-03-08T23:04:00.935 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.875139162Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=727.819µs 2026-03-08T23:04:00.935 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.876330346Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" 2026-03-08T23:04:00.935 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.876960222Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=630.197µs 2026-03-08T23:04:00.935 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.878751248Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3" 2026-03-08T23:04:00.935 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.879268534Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=517.257µs 2026-03-08T23:04:00.935 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.880418262Z level=info msg="Executing migration" id="drop table annotation_tag_v2" 2026-03-08T23:04:00.935 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.880926792Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=508.329µs 2026-03-08T23:04:00.935 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.882155286Z level=info 
msg="Executing migration" id="Update alert annotations and set TEXT to empty" 2026-03-08T23:04:00.935 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.882396357Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=238.476µs 2026-03-08T23:04:00.936 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.884025059Z level=info msg="Executing migration" id="Add created time to annotation table" 2026-03-08T23:04:00.936 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.885405858Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=1.380068ms 2026-03-08T23:04:00.936 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.886722296Z level=info msg="Executing migration" id="Add updated time to annotation table" 2026-03-08T23:04:00.936 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.888021002Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=1.298415ms 2026-03-08T23:04:00.936 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.889029395Z level=info msg="Executing migration" id="Add index for created in annotation table" 2026-03-08T23:04:00.936 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.889536402Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=506.838µs 2026-03-08T23:04:00.936 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.890976072Z level=info 
msg="Executing migration" id="Add index for updated in annotation table" 2026-03-08T23:04:00.936 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.891441811Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=464.016µs 2026-03-08T23:04:00.936 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.892574948Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds" 2026-03-08T23:04:00.936 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.892790951Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=215.863µs 2026-03-08T23:04:00.936 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.893945869Z level=info msg="Executing migration" id="Add epoch_end column" 2026-03-08T23:04:00.936 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.895254352Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=1.308604ms 2026-03-08T23:04:00.936 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.896419017Z level=info msg="Executing migration" id="Add index for epoch_end" 2026-03-08T23:04:00.936 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.896886851Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=465.821µs 2026-03-08T23:04:00.936 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.898408353Z level=info msg="Executing migration" id="Make epoch_end the 
same as epoch" 2026-03-08T23:04:00.936 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.8985996Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=191.136µs 2026-03-08T23:04:00.936 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.899784444Z level=info msg="Executing migration" id="Move region to single row" 2026-03-08T23:04:00.936 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.90004456Z level=info msg="Migration successfully executed" id="Move region to single row" duration=259.745µs 2026-03-08T23:04:00.936 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.901033527Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table" 2026-03-08T23:04:00.936 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.901527289Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=493.732µs 2026-03-08T23:04:00.936 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.903025347Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" 2026-03-08T23:04:00.936 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.903481359Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=456.062µs 2026-03-08T23:04:00.936 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.904488089Z level=info msg="Executing migration" id="Add 
index for org_id_dashboard_id_epoch_end_epoch on annotation table" 2026-03-08T23:04:00.936 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.904966913Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=478.684µs 2026-03-08T23:04:00.936 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.906461895Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table" 2026-03-08T23:04:00.936 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.906949386Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=487.411µs 2026-03-08T23:04:00.936 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.907960144Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table" 2026-03-08T23:04:00.936 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.90840738Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=447.225µs 2026-03-08T23:04:00.936 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.90954821Z level=info msg="Executing migration" id="Add index for alert_id on annotation table" 2026-03-08T23:04:00.936 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.910038687Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=490.397µs 2026-03-08T23:04:00.936 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 
23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.911327744Z level=info msg="Executing migration" id="Increase tags column to length 4096" 2026-03-08T23:04:00.936 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.911375513Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=48.04µs 2026-03-08T23:04:00.936 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.912637561Z level=info msg="Executing migration" id="create test_data table" 2026-03-08T23:04:00.936 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.913125281Z level=info msg="Migration successfully executed" id="create test_data table" duration=488.483µs 2026-03-08T23:04:00.936 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.914428355Z level=info msg="Executing migration" id="create dashboard_version table v1" 2026-03-08T23:04:00.936 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.914895078Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=466.532µs 2026-03-08T23:04:00.936 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.916538227Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id" 2026-03-08T23:04:00.936 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.917029734Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=491.457µs 2026-03-08T23:04:00.936 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator 
t=2026-03-08T23:04:00.918259602Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" 2026-03-08T23:04:00.936 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.918792867Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=532.764µs 2026-03-08T23:04:00.936 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.920065063Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0" 2026-03-08T23:04:00.936 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.920261982Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=198.211µs 2026-03-08T23:04:00.936 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.921292416Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1" 2026-03-08T23:04:00.936 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.92158876Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=295.882µs 2026-03-08T23:04:00.936 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.923161517Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1" 2026-03-08T23:04:00.936 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.923207062Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=46.387µs 
2026-03-08T23:04:00.936 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.924312888Z level=info msg="Executing migration" id="create team table" 2026-03-08T23:04:00.936 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.924766354Z level=info msg="Migration successfully executed" id="create team table" duration=453.335µs 2026-03-08T23:04:00.936 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.926052277Z level=info msg="Executing migration" id="add index team.org_id" 2026-03-08T23:04:00.936 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.926597815Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=545.919µs 2026-03-08T23:04:00.936 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.928163681Z level=info msg="Executing migration" id="add unique index team_org_id_name" 2026-03-08T23:04:00.936 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.928638246Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=475.578µs 2026-03-08T23:04:00.936 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.929827959Z level=info msg="Executing migration" id="Add column uid in team" 2026-03-08T23:04:01.185 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:00 vm11 bash[23232]: cluster 2026-03-08T23:03:59.627991+0000 mgr.y (mgr.24419) 36 : cluster [DBG] pgmap v13: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:04:01.185 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:00 vm11 bash[23232]: cluster 
2026-03-08T23:03:59.627991+0000 mgr.y (mgr.24419) 36 : cluster [DBG] pgmap v13: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:04:01.185 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:00 vm11 bash[23232]: audit 2026-03-08T23:04:00.269228+0000 mon.a (mon.0) 772 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:01.185 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:00 vm11 bash[23232]: audit 2026-03-08T23:04:00.269228+0000 mon.a (mon.0) 772 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:01.185 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:00 vm11 bash[23232]: audit 2026-03-08T23:04:00.275115+0000 mon.a (mon.0) 773 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:01.185 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:00 vm11 bash[23232]: audit 2026-03-08T23:04:00.275115+0000 mon.a (mon.0) 773 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:01.185 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:00 vm11 bash[23232]: audit 2026-03-08T23:04:00.282269+0000 mon.a (mon.0) 774 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:01.185 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:00 vm11 bash[23232]: audit 2026-03-08T23:04:00.282269+0000 mon.a (mon.0) 774 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:01.185 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:00 vm11 bash[23232]: audit 2026-03-08T23:04:00.289469+0000 mon.a (mon.0) 775 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:01.185 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:00 vm11 bash[23232]: audit 2026-03-08T23:04:00.289469+0000 mon.a (mon.0) 775 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:01.185 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:00 vm11 bash[23232]: audit 2026-03-08T23:04:00.302025+0000 mon.c (mon.2) 43 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' 
entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:04:01.185 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:00 vm11 bash[23232]: audit 2026-03-08T23:04:00.302025+0000 mon.c (mon.2) 43 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:04:01.185 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:00 vm11 bash[23232]: audit 2026-03-08T23:04:00.864801+0000 mon.a (mon.0) 776 : audit [DBG] from='client.? 192.168.123.106:0/82754349' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-08T23:04:01.185 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:00 vm11 bash[23232]: audit 2026-03-08T23:04:00.864801+0000 mon.a (mon.0) 776 : audit [DBG] from='client.? 192.168.123.106:0/82754349' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-08T23:04:01.186 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.931233635Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=1.404283ms 2026-03-08T23:04:01.186 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.934652841Z level=info msg="Executing migration" id="Update uid column values in team" 2026-03-08T23:04:01.186 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.934904771Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=252.181µs 2026-03-08T23:04:01.186 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.935947869Z level=info msg="Executing migration" id="Add unique index team_org_id_uid" 2026-03-08T23:04:01.186 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: 
logger=migrator t=2026-03-08T23:04:00.936508195Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=560.186µs 2026-03-08T23:04:01.186 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.938214513Z level=info msg="Executing migration" id="create team member table" 2026-03-08T23:04:01.186 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.938799415Z level=info msg="Migration successfully executed" id="create team member table" duration=585.174µs 2026-03-08T23:04:01.186 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.94012955Z level=info msg="Executing migration" id="add index team_member.org_id" 2026-03-08T23:04:01.186 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.940642969Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=513.559µs 2026-03-08T23:04:01.186 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.941861064Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id" 2026-03-08T23:04:01.186 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.94242117Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=560.287µs 2026-03-08T23:04:01.186 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.944180617Z level=info msg="Executing migration" id="add index team_member.team_id" 2026-03-08T23:04:01.186 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.944752144Z level=info 
msg="Migration successfully executed" id="add index team_member.team_id" duration=571.708µs 2026-03-08T23:04:01.186 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.946033737Z level=info msg="Executing migration" id="Add column email to team table" 2026-03-08T23:04:01.186 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.947821366Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=1.789112ms 2026-03-08T23:04:01.186 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.949018553Z level=info msg="Executing migration" id="Add column external to team_member table" 2026-03-08T23:04:01.186 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.95107314Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=2.054328ms 2026-03-08T23:04:01.186 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.953802248Z level=info msg="Executing migration" id="Add column permission to team_member table" 2026-03-08T23:04:01.186 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.955446769Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=1.6441ms 2026-03-08T23:04:01.186 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.956553567Z level=info msg="Executing migration" id="create dashboard acl table" 2026-03-08T23:04:01.186 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.957084448Z level=info msg="Migration successfully executed" id="create 
dashboard acl table" duration=530.6µs 2026-03-08T23:04:01.186 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.958428168Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id" 2026-03-08T23:04:01.186 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.959131963Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=702.151µs 2026-03-08T23:04:01.186 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.96077994Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id" 2026-03-08T23:04:01.186 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.961448078Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=668.108µs 2026-03-08T23:04:01.186 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.962804762Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id" 2026-03-08T23:04:01.186 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.963362064Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=557.061µs 2026-03-08T23:04:01.186 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.964591981Z level=info msg="Executing migration" id="add index dashboard_acl_user_id" 2026-03-08T23:04:01.186 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.965155422Z level=info msg="Migration successfully executed" id="add 
index dashboard_acl_user_id" duration=563.361µs 2026-03-08T23:04:01.186 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.966712221Z level=info msg="Executing migration" id="add index dashboard_acl_team_id" 2026-03-08T23:04:01.186 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.967230449Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=518.739µs 2026-03-08T23:04:01.186 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.968570051Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role" 2026-03-08T23:04:01.186 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.969153089Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=583.269µs 2026-03-08T23:04:01.186 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.970398206Z level=info msg="Executing migration" id="add index dashboard_permission" 2026-03-08T23:04:01.186 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.971055934Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=657.508µs 2026-03-08T23:04:01.186 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.972736723Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table" 2026-03-08T23:04:01.186 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.973134577Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=397.783µs 
2026-03-08T23:04:01.186 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.974501129Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders" 2026-03-08T23:04:01.187 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.974744824Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=243.534µs 2026-03-08T23:04:01.187 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.976202216Z level=info msg="Executing migration" id="create tag table" 2026-03-08T23:04:01.187 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.976657987Z level=info msg="Migration successfully executed" id="create tag table" duration=454.72µs 2026-03-08T23:04:01.187 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.979120617Z level=info msg="Executing migration" id="add index tag.key_value" 2026-03-08T23:04:01.187 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.979703626Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=583.401µs 2026-03-08T23:04:01.187 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.980936549Z level=info msg="Executing migration" id="create login attempt table" 2026-03-08T23:04:01.187 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.98141887Z level=info msg="Migration successfully executed" id="create login attempt table" duration=481.84µs 2026-03-08T23:04:01.187 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: 
logger=migrator t=2026-03-08T23:04:00.982889606Z level=info msg="Executing migration" id="add index login_attempt.username" 2026-03-08T23:04:01.187 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.983401823Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=512.587µs 2026-03-08T23:04:01.187 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.984593138Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1" 2026-03-08T23:04:01.187 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.985135922Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=542.944µs 2026-03-08T23:04:01.187 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.986358365Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" 2026-03-08T23:04:01.187 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.990655993Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=4.295564ms 2026-03-08T23:04:01.187 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.992410019Z level=info msg="Executing migration" id="create login_attempt v2" 2026-03-08T23:04:01.187 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.992971588Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=560.937µs 2026-03-08T23:04:01.187 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator 
t=2026-03-08T23:04:00.994302704Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2" 2026-03-08T23:04:01.187 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.994908475Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=604.249µs 2026-03-08T23:04:01.187 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.996165773Z level=info msg="Executing migration" id="copy login_attempt v1 to v2" 2026-03-08T23:04:01.187 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.996444234Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=278.181µs 2026-03-08T23:04:01.187 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.997823119Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty" 2026-03-08T23:04:01.187 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.998200594Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=377.454µs 2026-03-08T23:04:01.187 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.999148254Z level=info msg="Executing migration" id="create user auth table" 2026-03-08T23:04:01.187 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:00 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:00.999555946Z level=info msg="Migration successfully executed" id="create user auth table" duration=406.6µs 2026-03-08T23:04:01.187 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.000432223Z level=info msg="Executing migration" id="create index 
IDX_user_auth_auth_module_auth_id - v1" 2026-03-08T23:04:01.187 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.001004882Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=572.017µs 2026-03-08T23:04:01.187 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.002474297Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190" 2026-03-08T23:04:01.187 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.002529951Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=55.865µs 2026-03-08T23:04:01.187 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.003708712Z level=info msg="Executing migration" id="Add OAuth access token to user_auth" 2026-03-08T23:04:01.187 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.005311697Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=1.602875ms 2026-03-08T23:04:01.187 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.006490087Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth" 2026-03-08T23:04:01.187 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.008012982Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=1.522575ms 2026-03-08T23:04:01.187 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.009000055Z level=info msg="Executing migration" id="Add OAuth token type to 
user_auth" 2026-03-08T23:04:01.187 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.010519153Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=1.519148ms 2026-03-08T23:04:01.187 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.01214507Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth" 2026-03-08T23:04:01.187 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.013793539Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=1.648538ms 2026-03-08T23:04:01.187 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.01505257Z level=info msg="Executing migration" id="Add index to user_id column in user_auth" 2026-03-08T23:04:01.187 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.015619109Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=566.61µs 2026-03-08T23:04:01.187 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.016930158Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth" 2026-03-08T23:04:01.187 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.018511291Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=1.580762ms 2026-03-08T23:04:01.187 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.020196498Z level=info msg="Executing migration" id="create server_lock table" 2026-03-08T23:04:01.187 
INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.020655806Z level=info msg="Migration successfully executed" id="create server_lock table" duration=459.037µs 2026-03-08T23:04:01.187 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.02196418Z level=info msg="Executing migration" id="add index server_lock.operation_uid" 2026-03-08T23:04:01.187 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.022449246Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=485.086µs 2026-03-08T23:04:01.187 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.024101893Z level=info msg="Executing migration" id="create user auth token table" 2026-03-08T23:04:01.187 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.02460404Z level=info msg="Migration successfully executed" id="create user auth token table" duration=502.328µs 2026-03-08T23:04:01.187 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.02580836Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token" 2026-03-08T23:04:01.187 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.026355952Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=546.05µs 2026-03-08T23:04:01.187 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.027598794Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token" 2026-03-08T23:04:01.187 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 
23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.028134745Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=535.952µs 2026-03-08T23:04:01.187 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.029335758Z level=info msg="Executing migration" id="add index user_auth_token.user_id" 2026-03-08T23:04:01.187 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.029910562Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=574.944µs 2026-03-08T23:04:01.187 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.031404623Z level=info msg="Executing migration" id="Add revoked_at to the user auth token" 2026-03-08T23:04:01.187 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.033092726Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=1.687992ms 2026-03-08T23:04:01.187 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.03429458Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at" 2026-03-08T23:04:01.188 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.034789685Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=496.118µs 2026-03-08T23:04:01.188 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.036246005Z level=info msg="Executing migration" id="create cache_data table" 2026-03-08T23:04:01.188 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator 
t=2026-03-08T23:04:01.036687951Z level=info msg="Migration successfully executed" id="create cache_data table" duration=441.966µs 2026-03-08T23:04:01.188 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.037874137Z level=info msg="Executing migration" id="add unique index cache_data.cache_key" 2026-03-08T23:04:01.188 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.038347781Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=473.704µs 2026-03-08T23:04:01.188 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.04257691Z level=info msg="Executing migration" id="create short_url table v1" 2026-03-08T23:04:01.188 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.043244908Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=669.31µs 2026-03-08T23:04:01.188 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.044800884Z level=info msg="Executing migration" id="add index short_url.org_id-uid" 2026-03-08T23:04:01.188 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.045364327Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=563.113µs 2026-03-08T23:04:01.188 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.046526898Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint" 2026-03-08T23:04:01.188 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.046585096Z level=info msg="Migration 
successfully executed" id="alter table short_url alter column created_by type to bigint" duration=58.409µs 2026-03-08T23:04:01.188 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.047778145Z level=info msg="Executing migration" id="delete alert_definition table" 2026-03-08T23:04:01.188 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.047848697Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=70.162µs 2026-03-08T23:04:01.188 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.048819982Z level=info msg="Executing migration" id="recreate alert_definition table" 2026-03-08T23:04:01.188 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.049299166Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=479.114µs 2026-03-08T23:04:01.188 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.050840536Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns" 2026-03-08T23:04:01.188 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.0513574Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=516.945µs 2026-03-08T23:04:01.188 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.052635888Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns" 2026-03-08T23:04:01.188 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.053247491Z level=info 
msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=611.603µs 2026-03-08T23:04:01.188 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.054457791Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql" 2026-03-08T23:04:01.188 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.054518414Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=60.593µs 2026-03-08T23:04:01.188 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.055955439Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns" 2026-03-08T23:04:01.188 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.056504695Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=548.193µs 2026-03-08T23:04:01.188 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.057682805Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns" 2026-03-08T23:04:01.188 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.058174213Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=491.437µs 2026-03-08T23:04:01.188 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.05920613Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns" 
2026-03-08T23:04:01.188 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.059731732Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=526.965µs 2026-03-08T23:04:01.188 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.060952963Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns" 2026-03-08T23:04:01.188 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.061493643Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=540.6µs 2026-03-08T23:04:01.188 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.062611651Z level=info msg="Executing migration" id="Add column paused in alert_definition" 2026-03-08T23:04:01.188 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.064386947Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=1.775185ms 2026-03-08T23:04:01.188 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.065453028Z level=info msg="Executing migration" id="drop alert_definition table" 2026-03-08T23:04:01.188 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.065946471Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=493.271µs 2026-03-08T23:04:01.188 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.067116725Z level=info msg="Executing migration" id="delete 
alert_definition_version table" 2026-03-08T23:04:01.188 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.067186175Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=69.47µs 2026-03-08T23:04:01.188 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.068183228Z level=info msg="Executing migration" id="recreate alert_definition_version table" 2026-03-08T23:04:01.188 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.068642747Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=459.389µs 2026-03-08T23:04:01.188 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.070028645Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns" 2026-03-08T23:04:01.188 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.070541702Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=513.018µs 2026-03-08T23:04:01.188 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.071488361Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns" 2026-03-08T23:04:01.188 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.071998654Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=510.515µs 2026-03-08T23:04:01.188 
INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.072932468Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql" 2026-03-08T23:04:01.188 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.072987731Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=55.804µs 2026-03-08T23:04:01.188 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.074417081Z level=info msg="Executing migration" id="drop alert_definition_version table" 2026-03-08T23:04:01.188 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.074919559Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=502.358µs 2026-03-08T23:04:01.188 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.075869094Z level=info msg="Executing migration" id="create alert_instance table" 2026-03-08T23:04:01.188 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.076339682Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=470.177µs 2026-03-08T23:04:01.188 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.077229775Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" 2026-03-08T23:04:01.188 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.077791574Z level=info msg="Migration successfully executed" id="add index in alert_instance 
table on def_org_id, def_uid and current_state columns" duration=561.829µs 2026-03-08T23:04:01.188 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.079205144Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns" 2026-03-08T23:04:01.189 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.079773175Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=568.201µs 2026-03-08T23:04:01.189 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.080878289Z level=info msg="Executing migration" id="add column current_state_end to alert_instance" 2026-03-08T23:04:01.189 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.082916948Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=2.038578ms 2026-03-08T23:04:01.189 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.084163015Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance" 2026-03-08T23:04:01.189 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.08463698Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=474.126µs 2026-03-08T23:04:01.189 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.085992822Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance" 2026-03-08T23:04:01.189 
INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.086482327Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=489.535µs 2026-03-08T23:04:01.189 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.087516539Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance" 2026-03-08T23:04:01.189 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.09509265Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=7.57565ms 2026-03-08T23:04:01.189 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.096642685Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance" 2026-03-08T23:04:01.189 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.104405433Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=7.762158ms 2026-03-08T23:04:01.189 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.105676728Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance" 2026-03-08T23:04:01.189 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.106473616Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=796.617µs 2026-03-08T23:04:01.189 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.107540369Z level=info msg="Executing 
migration" id="add index rule_org_id, current_state on alert_instance" 2026-03-08T23:04:01.189 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.10805444Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=514.391µs 2026-03-08T23:04:01.189 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.109617168Z level=info msg="Executing migration" id="add current_reason column related to current_state" 2026-03-08T23:04:01.189 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.111505335Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=1.888017ms 2026-03-08T23:04:01.189 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.112467362Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance" 2026-03-08T23:04:01.189 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.114190991Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=1.723469ms 2026-03-08T23:04:01.189 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.115410468Z level=info msg="Executing migration" id="create alert_rule table" 2026-03-08T23:04:01.189 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.115879876Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=469.277µs 2026-03-08T23:04:01.189 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.117219959Z 
level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns" 2026-03-08T23:04:01.189 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.117694666Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=474.566µs 2026-03-08T23:04:01.189 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.118809197Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns" 2026-03-08T23:04:01.189 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.119279756Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=470.528µs 2026-03-08T23:04:01.189 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.120389078Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" 2026-03-08T23:04:01.189 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.121054451Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=655.714µs 2026-03-08T23:04:01.189 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.122529226Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql" 2026-03-08T23:04:01.189 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.122568939Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=30.889µs 2026-03-08T23:04:01.189 
INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.123783829Z level=info msg="Executing migration" id="add column for to alert_rule" 2026-03-08T23:04:01.189 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.125635528Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=1.85163ms 2026-03-08T23:04:01.189 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.126743627Z level=info msg="Executing migration" id="add column annotations to alert_rule" 2026-03-08T23:04:01.189 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.128584866Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=1.84123ms 2026-03-08T23:04:01.189 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.130063388Z level=info msg="Executing migration" id="add column labels to alert_rule" 2026-03-08T23:04:01.189 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.131886202Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=1.822474ms 2026-03-08T23:04:01.189 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.132887522Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns" 2026-03-08T23:04:01.189 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.133422352Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=535.09µs 2026-03-08T23:04:01.189 
INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.134550109Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns" 2026-03-08T23:04:01.189 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.135089766Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=539.758µs 2026-03-08T23:04:01.189 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.136349649Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule" 2026-03-08T23:04:01.189 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.138241303Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=1.892145ms 2026-03-08T23:04:01.189 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.14293517Z level=info msg="Executing migration" id="add panel_id column to alert_rule" 2026-03-08T23:04:01.189 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.144736274Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=1.800773ms 2026-03-08T23:04:01.189 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.146017697Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" 2026-03-08T23:04:01.189 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.146497985Z level=info msg="Migration successfully executed" id="add index in alert_rule on 
org_id, dashboard_uid and panel_id columns" duration=480.237µs 2026-03-08T23:04:01.189 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.14816584Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule" 2026-03-08T23:04:01.189 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.149960572Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=1.794662ms 2026-03-08T23:04:01.189 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.151180691Z level=info msg="Executing migration" id="add is_paused column to alert_rule table" 2026-03-08T23:04:01.189 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.152985743Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=1.804942ms 2026-03-08T23:04:01.189 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.154297673Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table" 2026-03-08T23:04:01.189 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.154328119Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=31.169µs 2026-03-08T23:04:01.189 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.155800821Z level=info msg="Executing migration" id="create alert_rule_version table" 2026-03-08T23:04:01.189 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.156339507Z level=info msg="Migration successfully executed" id="create 
alert_rule_version table" duration=538.025µs 2026-03-08T23:04:01.189 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.158201774Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" 2026-03-08T23:04:01.189 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.158805994Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=603.747µs 2026-03-08T23:04:01.189 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.16027155Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" 2026-03-08T23:04:01.189 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.16088176Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=610.61µs 2026-03-08T23:04:01.189 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.162737746Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql" 2026-03-08T23:04:01.189 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.162769757Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=33.512µs 2026-03-08T23:04:01.189 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.164157349Z level=info msg="Executing migration" id="add column for to 
alert_rule_version" 2026-03-08T23:04:01.189 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.166086962Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=1.929394ms 2026-03-08T23:04:01.189 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.16746728Z level=info msg="Executing migration" id="add column annotations to alert_rule_version" 2026-03-08T23:04:01.190 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.16942595Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=1.958489ms 2026-03-08T23:04:01.190 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.170599982Z level=info msg="Executing migration" id="add column labels to alert_rule_version" 2026-03-08T23:04:01.190 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.172480274Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=1.880442ms 2026-03-08T23:04:01.190 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.174040237Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version" 2026-03-08T23:04:01.190 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.175989838Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=1.949261ms 2026-03-08T23:04:01.190 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.177214506Z level=info msg="Executing migration" id="add is_paused 
column to alert_rule_versions table" 2026-03-08T23:04:01.190 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.179097623Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=1.882886ms 2026-03-08T23:04:01.190 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.180216493Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table" 2026-03-08T23:04:01.190 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.180272999Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=56.666µs 2026-03-08T23:04:01.190 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.181722457Z level=info msg="Executing migration" id=create_alert_configuration_table 2026-03-08T23:04:01.190 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.182089402Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=366.776µs 2026-03-08T23:04:01.190 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.183235202Z level=info msg="Executing migration" id="Add column default in alert_configuration" 2026-03-08T23:04:01.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:00 vm06 bash[20625]: cluster 2026-03-08T23:03:59.627991+0000 mgr.y (mgr.24419) 36 : cluster [DBG] pgmap v13: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:04:01.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:00 vm06 bash[20625]: cluster 2026-03-08T23:03:59.627991+0000 mgr.y (mgr.24419) 36 : cluster [DBG] pgmap 
v13: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:04:01.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:00 vm06 bash[20625]: audit 2026-03-08T23:04:00.269228+0000 mon.a (mon.0) 772 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:01.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:00 vm06 bash[20625]: audit 2026-03-08T23:04:00.269228+0000 mon.a (mon.0) 772 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:01.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:00 vm06 bash[20625]: audit 2026-03-08T23:04:00.275115+0000 mon.a (mon.0) 773 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:01.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:00 vm06 bash[20625]: audit 2026-03-08T23:04:00.275115+0000 mon.a (mon.0) 773 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:01.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:00 vm06 bash[20625]: audit 2026-03-08T23:04:00.282269+0000 mon.a (mon.0) 774 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:01.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:00 vm06 bash[20625]: audit 2026-03-08T23:04:00.282269+0000 mon.a (mon.0) 774 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:01.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:00 vm06 bash[20625]: audit 2026-03-08T23:04:00.289469+0000 mon.a (mon.0) 775 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:01.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:00 vm06 bash[20625]: audit 2026-03-08T23:04:00.289469+0000 mon.a (mon.0) 775 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:01.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:00 vm06 bash[20625]: audit 2026-03-08T23:04:00.302025+0000 mon.c (mon.2) 43 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 
2026-03-08T23:04:01.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:00 vm06 bash[20625]: audit 2026-03-08T23:04:00.302025+0000 mon.c (mon.2) 43 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:04:01.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:00 vm06 bash[20625]: audit 2026-03-08T23:04:00.864801+0000 mon.a (mon.0) 776 : audit [DBG] from='client.? 192.168.123.106:0/82754349' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-08T23:04:01.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:00 vm06 bash[20625]: audit 2026-03-08T23:04:00.864801+0000 mon.a (mon.0) 776 : audit [DBG] from='client.? 192.168.123.106:0/82754349' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-08T23:04:01.279 INFO:journalctl@ceph.alertmanager.a.vm06.stdout:Mar 08 23:04:00 vm06 bash[55553]: ts=2026-03-08T23:04:00.956Z caller=cluster.go:698 level=info component=cluster msg="gossip settled; proceeding" elapsed=10.002032465s 2026-03-08T23:04:01.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:00 vm06 bash[27746]: cluster 2026-03-08T23:03:59.627991+0000 mgr.y (mgr.24419) 36 : cluster [DBG] pgmap v13: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:04:01.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:00 vm06 bash[27746]: cluster 2026-03-08T23:03:59.627991+0000 mgr.y (mgr.24419) 36 : cluster [DBG] pgmap v13: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:04:01.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:00 vm06 bash[27746]: audit 2026-03-08T23:04:00.269228+0000 mon.a (mon.0) 772 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:01.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:00 vm06 bash[27746]: audit 
2026-03-08T23:04:00.269228+0000 mon.a (mon.0) 772 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:01.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:00 vm06 bash[27746]: audit 2026-03-08T23:04:00.275115+0000 mon.a (mon.0) 773 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:01.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:00 vm06 bash[27746]: audit 2026-03-08T23:04:00.275115+0000 mon.a (mon.0) 773 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:01.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:00 vm06 bash[27746]: audit 2026-03-08T23:04:00.282269+0000 mon.a (mon.0) 774 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:01.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:00 vm06 bash[27746]: audit 2026-03-08T23:04:00.282269+0000 mon.a (mon.0) 774 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:01.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:00 vm06 bash[27746]: audit 2026-03-08T23:04:00.289469+0000 mon.a (mon.0) 775 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:01.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:00 vm06 bash[27746]: audit 2026-03-08T23:04:00.289469+0000 mon.a (mon.0) 775 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:01.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:00 vm06 bash[27746]: audit 2026-03-08T23:04:00.302025+0000 mon.c (mon.2) 43 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:04:01.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:00 vm06 bash[27746]: audit 2026-03-08T23:04:00.302025+0000 mon.c (mon.2) 43 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:04:01.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:00 vm06 bash[27746]: audit 
2026-03-08T23:04:00.864801+0000 mon.a (mon.0) 776 : audit [DBG] from='client.? 192.168.123.106:0/82754349' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-08T23:04:01.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:00 vm06 bash[27746]: audit 2026-03-08T23:04:00.864801+0000 mon.a (mon.0) 776 : audit [DBG] from='client.? 192.168.123.106:0/82754349' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-08T23:04:01.438 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.185281584Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=2.046231ms 2026-03-08T23:04:01.438 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.1865052Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" 2026-03-08T23:04:01.438 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.186533343Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=28.643µs 2026-03-08T23:04:01.438 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.188014649Z level=info msg="Executing migration" id="add column org_id in alert_configuration" 2026-03-08T23:04:01.438 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.18988397Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=1.870364ms 2026-03-08T23:04:01.438 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.190927179Z level=info 
msg="Executing migration" id="add index in alert_configuration table on org_id column" 2026-03-08T23:04:01.438 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.191393219Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=466.22µs 2026-03-08T23:04:01.438 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.192496099Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration" 2026-03-08T23:04:01.438 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.194427818Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=1.931849ms 2026-03-08T23:04:01.438 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.195556275Z level=info msg="Executing migration" id=create_ngalert_configuration_table 2026-03-08T23:04:01.438 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.19592331Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=367.045µs 2026-03-08T23:04:01.438 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.197362138Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column" 2026-03-08T23:04:01.438 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.197822799Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=460.711µs 2026-03-08T23:04:01.438 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: 
logger=migrator t=2026-03-08T23:04:01.198944183Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration" 2026-03-08T23:04:01.438 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.200755075Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=1.810772ms 2026-03-08T23:04:01.439 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.201964084Z level=info msg="Executing migration" id="create provenance_type table" 2026-03-08T23:04:01.439 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.20232585Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=361.836µs 2026-03-08T23:04:01.439 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.203816624Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns" 2026-03-08T23:04:01.439 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.204282775Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=466.001µs 2026-03-08T23:04:01.439 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.205422153Z level=info msg="Executing migration" id="create alert_image table" 2026-03-08T23:04:01.439 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.205793818Z level=info msg="Migration successfully executed" id="create alert_image table" duration=371.684µs 2026-03-08T23:04:01.439 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: 
logger=migrator t=2026-03-08T23:04:01.206937964Z level=info msg="Executing migration" id="add unique index on token to alert_image table" 2026-03-08T23:04:01.439 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.207397152Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=459.218µs 2026-03-08T23:04:01.439 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.208972795Z level=info msg="Executing migration" id="support longer URLs in alert_image table" 2026-03-08T23:04:01.439 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.209027557Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=44.984µs 2026-03-08T23:04:01.439 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.210237346Z level=info msg="Executing migration" id=create_alert_configuration_history_table 2026-03-08T23:04:01.439 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.210711772Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=474.156µs 2026-03-08T23:04:01.439 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.211894762Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration" 2026-03-08T23:04:01.439 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.212351786Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=458.125µs 2026-03-08T23:04:01.439 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 
vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.21378883Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists" 2026-03-08T23:04:01.439 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.213983824Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists" 2026-03-08T23:04:01.439 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.214905125Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table" 2026-03-08T23:04:01.439 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.215178034Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=272.769µs 2026-03-08T23:04:01.439 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.215969673Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration" 2026-03-08T23:04:01.439 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.216431045Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=461.15µs 2026-03-08T23:04:01.439 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.217750218Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history" 2026-03-08T23:04:01.439 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.21969401Z level=info msg="Migration successfully executed" id="add last_applied column to 
alert_configuration_history" duration=1.943852ms 2026-03-08T23:04:01.439 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.220583481Z level=info msg="Executing migration" id="create library_element table v1" 2026-03-08T23:04:01.439 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.221105717Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=521.194µs 2026-03-08T23:04:01.439 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.222170916Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind" 2026-03-08T23:04:01.439 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.222656142Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=485.195µs 2026-03-08T23:04:01.439 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.22400393Z level=info msg="Executing migration" id="create library_element_connection table v1" 2026-03-08T23:04:01.439 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.22438394Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=379.829µs 2026-03-08T23:04:01.439 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.22547626Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id" 2026-03-08T23:04:01.439 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.225971815Z level=info msg="Migration successfully 
executed" id="add index library_element_connection element_id-kind-connection_id" duration=495.555µs 2026-03-08T23:04:01.439 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.228800088Z level=info msg="Executing migration" id="add unique index library_element org_id_uid" 2026-03-08T23:04:01.439 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.229382646Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=582.418µs 2026-03-08T23:04:01.439 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.230834898Z level=info msg="Executing migration" id="increase max description length to 2048" 2026-03-08T23:04:01.439 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.230850518Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=16.19µs 2026-03-08T23:04:01.439 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.232097668Z level=info msg="Executing migration" id="alter library_element model to mediumtext" 2026-03-08T23:04:01.439 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.232171795Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=73.626µs 2026-03-08T23:04:01.439 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.23319676Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting" 2026-03-08T23:04:01.439 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.233445775Z level=info 
msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=248.616µs 2026-03-08T23:04:01.439 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.234630698Z level=info msg="Executing migration" id="create data_keys table" 2026-03-08T23:04:01.439 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.235161771Z level=info msg="Migration successfully executed" id="create data_keys table" duration=532.135µs 2026-03-08T23:04:01.439 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.236686047Z level=info msg="Executing migration" id="create secrets table" 2026-03-08T23:04:01.439 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.23716913Z level=info msg="Migration successfully executed" id="create secrets table" duration=483.234µs 2026-03-08T23:04:01.439 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.238319758Z level=info msg="Executing migration" id="rename data_keys name column to id" 2026-03-08T23:04:01.439 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.248131004Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=9.792661ms 2026-03-08T23:04:01.439 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.249272586Z level=info msg="Executing migration" id="add name column into data_keys" 2026-03-08T23:04:01.439 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.251428233Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=2.155326ms 
2026-03-08T23:04:01.439 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.252912125Z level=info msg="Executing migration" id="copy data_keys id column values into name" 2026-03-08T23:04:01.439 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.253020808Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=97.763µs 2026-03-08T23:04:01.439 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.254157761Z level=info msg="Executing migration" id="rename data_keys name column to label" 2026-03-08T23:04:01.439 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.268459665Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=14.298958ms 2026-03-08T23:04:01.440 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.270654513Z level=info msg="Executing migration" id="rename data_keys id column back to name" 2026-03-08T23:04:01.440 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.281507656Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=10.853823ms 2026-03-08T23:04:01.440 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.283015312Z level=info msg="Executing migration" id="create kv_store table v1" 2026-03-08T23:04:01.440 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.28364082Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=625.148µs 2026-03-08T23:04:01.440 
INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.285250316Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key" 2026-03-08T23:04:01.440 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.285953209Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=702.773µs 2026-03-08T23:04:01.440 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.287573285Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations" 2026-03-08T23:04:01.440 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.287871522Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=298.919µs 2026-03-08T23:04:01.440 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.288953363Z level=info msg="Executing migration" id="create permission table" 2026-03-08T23:04:01.440 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.289520732Z level=info msg="Migration successfully executed" id="create permission table" duration=567.039µs 2026-03-08T23:04:01.440 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.291258357Z level=info msg="Executing migration" id="add unique index permission.role_id" 2026-03-08T23:04:01.440 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.292119436Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=864.765µs 2026-03-08T23:04:01.440 
INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.293530862Z level=info msg="Executing migration" id="add unique index role_id_action_scope" 2026-03-08T23:04:01.440 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.294157123Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=626.23µs 2026-03-08T23:04:01.440 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.295409221Z level=info msg="Executing migration" id="create role table" 2026-03-08T23:04:01.440 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.295940082Z level=info msg="Migration successfully executed" id="create role table" duration=529.088µs 2026-03-08T23:04:01.440 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.297524081Z level=info msg="Executing migration" id="add column display_name" 2026-03-08T23:04:01.440 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.299885081Z level=info msg="Migration successfully executed" id="add column display_name" duration=2.360799ms 2026-03-08T23:04:01.440 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.301033125Z level=info msg="Executing migration" id="add column group_name" 2026-03-08T23:04:01.440 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.30349338Z level=info msg="Migration successfully executed" id="add column group_name" duration=2.459704ms 2026-03-08T23:04:01.440 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.304795974Z 
level=info msg="Executing migration" id="add index role.org_id" 2026-03-08T23:04:01.440 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.305389812Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=593.879µs 2026-03-08T23:04:01.440 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.30666828Z level=info msg="Executing migration" id="add unique index role_org_id_name" 2026-03-08T23:04:01.440 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.307273521Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=603.427µs 2026-03-08T23:04:01.440 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.309996427Z level=info msg="Executing migration" id="add index role_org_id_uid" 2026-03-08T23:04:01.440 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.310708146Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=711.86µs 2026-03-08T23:04:01.440 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.312234167Z level=info msg="Executing migration" id="create team role table" 2026-03-08T23:04:01.440 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.312933523Z level=info msg="Migration successfully executed" id="create team role table" duration=700.569µs 2026-03-08T23:04:01.440 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.314423405Z level=info msg="Executing migration" id="add index team_role.org_id" 2026-03-08T23:04:01.440 
INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.315109717Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=686.422µs 2026-03-08T23:04:01.440 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.316757806Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id" 2026-03-08T23:04:01.440 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.317448435Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=690.458µs 2026-03-08T23:04:01.440 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.318816962Z level=info msg="Executing migration" id="add index team_role.team_id" 2026-03-08T23:04:01.440 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.319416131Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=598.999µs 2026-03-08T23:04:01.440 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.3209036Z level=info msg="Executing migration" id="create user role table" 2026-03-08T23:04:01.440 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.321412009Z level=info msg="Migration successfully executed" id="create user role table" duration=508.469µs 2026-03-08T23:04:01.440 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.322753203Z level=info msg="Executing migration" id="add index user_role.org_id" 2026-03-08T23:04:01.440 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: 
logger=migrator t=2026-03-08T23:04:01.323345049Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=591.845µs 2026-03-08T23:04:01.440 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.324590406Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id" 2026-03-08T23:04:01.440 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.325220603Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=630.126µs 2026-03-08T23:04:01.440 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.326754207Z level=info msg="Executing migration" id="add index user_role.user_id" 2026-03-08T23:04:01.440 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.32738197Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=628.064µs 2026-03-08T23:04:01.440 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.328632295Z level=info msg="Executing migration" id="create builtin role table" 2026-03-08T23:04:01.440 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.329182242Z level=info msg="Migration successfully executed" id="create builtin role table" duration=549.576µs 2026-03-08T23:04:01.440 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.330459689Z level=info msg="Executing migration" id="add index builtin_role.role_id" 2026-03-08T23:04:01.440 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.331030795Z level=info msg="Migration 
successfully executed" id="add index builtin_role.role_id" duration=570.986µs 2026-03-08T23:04:01.440 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.332267674Z level=info msg="Executing migration" id="add index builtin_role.name" 2026-03-08T23:04:01.440 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.332823854Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=556.25µs 2026-03-08T23:04:01.440 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.334307916Z level=info msg="Executing migration" id="Add column org_id to builtin_role table" 2026-03-08T23:04:01.440 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.336774833Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=2.466698ms 2026-03-08T23:04:01.440 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.338035278Z level=info msg="Executing migration" id="add index builtin_role.org_id" 2026-03-08T23:04:01.440 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.338590956Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=555.698µs 2026-03-08T23:04:01.440 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.339902796Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role" 2026-03-08T23:04:01.440 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.340463854Z level=info msg="Migration successfully executed" id="add unique index 
builtin_role_org_id_role_id_role" duration=561.208µs 2026-03-08T23:04:01.441 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.342014951Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid" 2026-03-08T23:04:01.441 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.342556452Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=541.862µs 2026-03-08T23:04:01.441 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.343550499Z level=info msg="Executing migration" id="add unique index role.uid" 2026-03-08T23:04:01.441 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.344094565Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=544.066µs 2026-03-08T23:04:01.441 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.345140158Z level=info msg="Executing migration" id="create seed assignment table" 2026-03-08T23:04:01.441 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.345594797Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=454.579µs 2026-03-08T23:04:01.441 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.347189436Z level=info msg="Executing migration" id="add unique index builtin_role_role_name" 2026-03-08T23:04:01.441 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.347749702Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=560.206µs 2026-03-08T23:04:01.441 
INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.349034111Z level=info msg="Executing migration" id="add column hidden to role table" 2026-03-08T23:04:01.441 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.351591578Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=2.557247ms 2026-03-08T23:04:01.441 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.352891897Z level=info msg="Executing migration" id="permission kind migration" 2026-03-08T23:04:01.441 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.356071796Z level=info msg="Migration successfully executed" id="permission kind migration" duration=3.18002ms 2026-03-08T23:04:01.441 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.357821375Z level=info msg="Executing migration" id="permission attribute migration" 2026-03-08T23:04:01.441 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.360221037Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=2.39858ms 2026-03-08T23:04:01.441 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.361492692Z level=info msg="Executing migration" id="permission identifier migration" 2026-03-08T23:04:01.441 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.363930014Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=2.437283ms 2026-03-08T23:04:01.441 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator 
t=2026-03-08T23:04:01.36498774Z level=info msg="Executing migration" id="add permission identifier index" 2026-03-08T23:04:01.441 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.365671667Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=683.797µs 2026-03-08T23:04:01.441 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.367292585Z level=info msg="Executing migration" id="add permission action scope role_id index" 2026-03-08T23:04:01.441 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.367978165Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=685.72µs 2026-03-08T23:04:01.441 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.369288542Z level=info msg="Executing migration" id="remove permission role_id action scope index" 2026-03-08T23:04:01.441 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.369862926Z level=info msg="Migration successfully executed" id="remove permission role_id action scope index" duration=574.373µs 2026-03-08T23:04:01.441 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.371065031Z level=info msg="Executing migration" id="create query_history table v1" 2026-03-08T23:04:01.441 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.371577978Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=512.737µs 2026-03-08T23:04:01.441 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.372798198Z level=info 
msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid" 2026-03-08T23:04:01.441 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.37336182Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=563.432µs 2026-03-08T23:04:01.441 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.374608439Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint" 2026-03-08T23:04:01.441 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.374800057Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=191.467µs 2026-03-08T23:04:01.441 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.375869284Z level=info msg="Executing migration" id="rbac disabled migrator" 2026-03-08T23:04:01.441 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.375900523Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=31.81µs 2026-03-08T23:04:01.441 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.37725389Z level=info msg="Executing migration" id="teams permissions migration" 2026-03-08T23:04:01.441 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.377602562Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=348.781µs 2026-03-08T23:04:01.441 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.378794617Z 
level=info msg="Executing migration" id="dashboard permissions" 2026-03-08T23:04:01.441 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.37915473Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=361.676µs 2026-03-08T23:04:01.441 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.380102531Z level=info msg="Executing migration" id="dashboard permissions uid scopes" 2026-03-08T23:04:01.441 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.380570966Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=468.625µs 2026-03-08T23:04:01.441 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.381903103Z level=info msg="Executing migration" id="drop managed folder create actions" 2026-03-08T23:04:01.441 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.38216301Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=259.095µs 2026-03-08T23:04:01.441 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.383530343Z level=info msg="Executing migration" id="alerting notification permissions" 2026-03-08T23:04:01.441 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.383914942Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=384.518µs 2026-03-08T23:04:01.441 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.384978298Z level=info msg="Executing migration" id="create query_history_star table v1" 
2026-03-08T23:04:01.441 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.385439479Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=462.694µs 2026-03-08T23:04:01.441 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.386640793Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid" 2026-03-08T23:04:01.441 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.387172085Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=531.081µs 2026-03-08T23:04:01.441 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.388710658Z level=info msg="Executing migration" id="add column org_id in query_history_star" 2026-03-08T23:04:01.441 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.391308692Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=2.597732ms 2026-03-08T23:04:01.441 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.392504756Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint" 2026-03-08T23:04:01.441 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.392597799Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=93.345µs 2026-03-08T23:04:01.441 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.393862171Z level=info msg="Executing migration" id="create 
correlation table v1" 2026-03-08T23:04:01.441 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.394417438Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=556.049µs 2026-03-08T23:04:01.441 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.39605633Z level=info msg="Executing migration" id="add index correlations.uid" 2026-03-08T23:04:01.441 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.396582612Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=526.313µs 2026-03-08T23:04:01.441 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.397757106Z level=info msg="Executing migration" id="add index correlations.source_uid" 2026-03-08T23:04:01.441 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.398269552Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=512.467µs 2026-03-08T23:04:01.441 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.399431694Z level=info msg="Executing migration" id="add correlation config column" 2026-03-08T23:04:01.441 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.40199942Z level=info msg="Migration successfully executed" id="add correlation config column" duration=2.567055ms 2026-03-08T23:04:01.441 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.403180696Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1" 2026-03-08T23:04:01.441 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 
08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.403679498Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=499.824µs 2026-03-08T23:04:01.441 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.404895821Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1" 2026-03-08T23:04:01.441 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.405434977Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=538.895µs 2026-03-08T23:04:01.441 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.406534451Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1" 2026-03-08T23:04:01.441 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.413503587Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=6.968786ms 2026-03-08T23:04:01.442 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.414929922Z level=info msg="Executing migration" id="create correlation v2" 2026-03-08T23:04:01.442 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.415484887Z level=info msg="Migration successfully executed" id="create correlation v2" duration=554.706µs 2026-03-08T23:04:01.442 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.416416859Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2" 2026-03-08T23:04:01.442 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 
bash[51186]: logger=migrator t=2026-03-08T23:04:01.416911292Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=494.353µs 2026-03-08T23:04:01.442 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.418065228Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2" 2026-03-08T23:04:01.442 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.418618772Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=554.506µs 2026-03-08T23:04:01.442 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.420122861Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2" 2026-03-08T23:04:01.442 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.420626652Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=503.951µs 2026-03-08T23:04:01.442 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.421730544Z level=info msg="Executing migration" id="copy correlation v1 to v2" 2026-03-08T23:04:01.442 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.421934234Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=203.882µs 2026-03-08T23:04:01.442 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.42289591Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty" 2026-03-08T23:04:01.442 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator 
t=2026-03-08T23:04:01.423326726Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=430.635µs 2026-03-08T23:04:01.442 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.424548027Z level=info msg="Executing migration" id="add provisioning column" 2026-03-08T23:04:01.442 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.42702893Z level=info msg="Migration successfully executed" id="add provisioning column" duration=2.480623ms 2026-03-08T23:04:01.442 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.428241434Z level=info msg="Executing migration" id="create entity_events table" 2026-03-08T23:04:01.442 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.428749624Z level=info msg="Migration successfully executed" id="create entity_events table" duration=508.12µs 2026-03-08T23:04:01.442 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.43024161Z level=info msg="Executing migration" id="create dashboard public config v1" 2026-03-08T23:04:01.442 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.43085649Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=614.709µs 2026-03-08T23:04:01.442 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.432056941Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1" 2026-03-08T23:04:01.442 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.432322116Z level=warn msg="Skipping migration: Already executed, but not 
recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1" 2026-03-08T23:04:01.442 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.433469339Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 2026-03-08T23:04:01.442 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.43372173Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 2026-03-08T23:04:01.442 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.434686833Z level=info msg="Executing migration" id="Drop old dashboard public config table" 2026-03-08T23:04:01.442 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.435123048Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=436.085µs 2026-03-08T23:04:01.442 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.436433536Z level=info msg="Executing migration" id="recreate dashboard public config v1" 2026-03-08T23:04:01.442 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.436928981Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=495.415µs 2026-03-08T23:04:01.442 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.438090782Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1" 2026-03-08T23:04:01.693 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator 
t=2026-03-08T23:04:01.440196945Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=2.106193ms 2026-03-08T23:04:01.693 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.441780193Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 2026-03-08T23:04:01.693 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.442349686Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=569.724µs 2026-03-08T23:04:01.693 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.443553796Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2" 2026-03-08T23:04:01.693 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.444095447Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=541.912µs 2026-03-08T23:04:01.693 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.445089664Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" 2026-03-08T23:04:01.693 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.445640333Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=550.649µs 2026-03-08T23:04:01.693 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.447104377Z level=info msg="Executing migration" id="Drop public config table" 
2026-03-08T23:04:01.693 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.447567202Z level=info msg="Migration successfully executed" id="Drop public config table" duration=464.157µs 2026-03-08T23:04:01.693 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.448496808Z level=info msg="Executing migration" id="Recreate dashboard public config v2" 2026-03-08T23:04:01.693 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.449040794Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=543.735µs 2026-03-08T23:04:01.693 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.450442953Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2" 2026-03-08T23:04:01.693 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.450990305Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=547.283µs 2026-03-08T23:04:01.693 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.451867894Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" 2026-03-08T23:04:01.693 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.452419946Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=551.972µs 2026-03-08T23:04:01.693 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.453443939Z level=info msg="Executing migration" id="create 
index UQE_dashboard_public_config_access_token - v2" 2026-03-08T23:04:01.693 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.454014173Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=570.265µs 2026-03-08T23:04:01.693 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.455344228Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2" 2026-03-08T23:04:01.693 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.463455928Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=8.11159ms 2026-03-08T23:04:01.693 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.464553698Z level=info msg="Executing migration" id="add annotations_enabled column" 2026-03-08T23:04:01.693 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.4670385Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=2.484692ms 2026-03-08T23:04:01.693 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.468169062Z level=info msg="Executing migration" id="add time_selection_enabled column" 2026-03-08T23:04:01.693 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.470997965Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=2.828763ms 2026-03-08T23:04:01.693 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.472508567Z level=info 
msg="Executing migration" id="delete orphaned public dashboards" 2026-03-08T23:04:01.693 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.472690356Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=181.409µs 2026-03-08T23:04:01.693 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.47377374Z level=info msg="Executing migration" id="add share column" 2026-03-08T23:04:01.693 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.476533084Z level=info msg="Migration successfully executed" id="add share column" duration=2.759113ms 2026-03-08T23:04:01.693 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.477731432Z level=info msg="Executing migration" id="backfill empty share column fields with default of public" 2026-03-08T23:04:01.693 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.477888806Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=157.554µs 2026-03-08T23:04:01.693 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.47884409Z level=info msg="Executing migration" id="create file table" 2026-03-08T23:04:01.693 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.479285394Z level=info msg="Migration successfully executed" id="create file table" duration=441.134µs 2026-03-08T23:04:01.693 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.480711137Z level=info msg="Executing migration" id="file table idx: path natural pk" 2026-03-08T23:04:01.693 
INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.481220649Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=508.419µs 2026-03-08T23:04:01.694 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.482381027Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval" 2026-03-08T23:04:01.694 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.482900878Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=519.811µs 2026-03-08T23:04:01.694 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.484393856Z level=info msg="Executing migration" id="create file_meta table" 2026-03-08T23:04:01.694 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.484782462Z level=info msg="Migration successfully executed" id="create file_meta table" duration=388.435µs 2026-03-08T23:04:01.694 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.486223494Z level=info msg="Executing migration" id="file table idx: path key" 2026-03-08T23:04:01.694 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.486724849Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=501.305µs 2026-03-08T23:04:01.694 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.487942896Z level=info msg="Executing migration" id="set path collation in file table" 2026-03-08T23:04:01.694 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 
23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.488003068Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=61.555µs 2026-03-08T23:04:01.694 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.488921283Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL" 2026-03-08T23:04:01.694 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.488987747Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=66.945µs 2026-03-08T23:04:01.694 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.490109351Z level=info msg="Executing migration" id="managed permissions migration" 2026-03-08T23:04:01.694 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.490390848Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=281.287µs 2026-03-08T23:04:01.694 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.491669385Z level=info msg="Executing migration" id="managed folder permissions alert actions migration" 2026-03-08T23:04:01.694 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.491836338Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=166.772µs 2026-03-08T23:04:01.694 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.492693949Z level=info msg="Executing migration" id="RBAC action name migrator" 2026-03-08T23:04:01.694 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 
bash[51186]: logger=migrator t=2026-03-08T23:04:01.493414765Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=720.706µs 2026-03-08T23:04:01.694 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.494481338Z level=info msg="Executing migration" id="Add UID column to playlist" 2026-03-08T23:04:01.694 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.497324398Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=2.842919ms 2026-03-08T23:04:01.694 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.498735453Z level=info msg="Executing migration" id="Update uid column values in playlist" 2026-03-08T23:04:01.694 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.498879272Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=144.11µs 2026-03-08T23:04:01.694 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.499803839Z level=info msg="Executing migration" id="Add index for uid in playlist" 2026-03-08T23:04:01.694 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.500343918Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=540.119µs 2026-03-08T23:04:01.694 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.501451047Z level=info msg="Executing migration" id="update group index for alert rules" 2026-03-08T23:04:01.694 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.501687097Z level=info msg="Migration 
successfully executed" id="update group index for alert rules" duration=235.891µs 2026-03-08T23:04:01.694 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.502782814Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration" 2026-03-08T23:04:01.694 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.503016811Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=234.148µs 2026-03-08T23:04:01.694 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.504235567Z level=info msg="Executing migration" id="admin only folder/dashboard permission" 2026-03-08T23:04:01.694 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.504501744Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=266.277µs 2026-03-08T23:04:01.694 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.505500418Z level=info msg="Executing migration" id="add action column to seed_assignment" 2026-03-08T23:04:01.694 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.508127145Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=2.626605ms 2026-03-08T23:04:01.694 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.509101395Z level=info msg="Executing migration" id="add scope column to seed_assignment" 2026-03-08T23:04:01.694 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.511749842Z level=info 
msg="Migration successfully executed" id="add scope column to seed_assignment" duration=2.648757ms 2026-03-08T23:04:01.694 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.513127876Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update" 2026-03-08T23:04:01.694 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.5136012Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=473.445µs 2026-03-08T23:04:01.694 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.51466141Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable" 2026-03-08T23:04:01.694 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.540457733Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=25.795982ms 2026-03-08T23:04:01.694 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.541936064Z level=info msg="Executing migration" id="add unique index builtin_role_name back" 2026-03-08T23:04:01.694 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.542536405Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=599.991µs 2026-03-08T23:04:01.695 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.544010198Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope" 2026-03-08T23:04:01.695 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: 
logger=migrator t=2026-03-08T23:04:01.544640284Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=630.188µs 2026-03-08T23:04:01.695 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.545862257Z level=info msg="Executing migration" id="add primary key to seed_assigment" 2026-03-08T23:04:01.695 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.553958148Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=8.095872ms 2026-03-08T23:04:01.695 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.555556044Z level=info msg="Executing migration" id="add origin column to seed_assignment" 2026-03-08T23:04:01.695 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.558904037Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=3.347553ms 2026-03-08T23:04:01.695 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.560177625Z level=info msg="Executing migration" id="add origin to plugin seed_assignment" 2026-03-08T23:04:01.695 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.560390042Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=212.217µs 2026-03-08T23:04:01.695 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.561601164Z level=info msg="Executing migration" id="prevent seeding OnCall access" 2026-03-08T23:04:01.695 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.561749913Z 
level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=148.768µs 2026-03-08T23:04:01.695 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.562723391Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration" 2026-03-08T23:04:01.695 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.562888799Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=165.498µs 2026-03-08T23:04:01.695 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.564494969Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration" 2026-03-08T23:04:01.695 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.564655268Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=160.079µs 2026-03-08T23:04:01.695 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.56579062Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse" 2026-03-08T23:04:01.695 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.565933596Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=142.797µs 2026-03-08T23:04:01.695 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.566987885Z level=info msg="Executing migration" id="create folder table" 2026-03-08T23:04:01.695 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: 
logger=migrator t=2026-03-08T23:04:01.567480826Z level=info msg="Migration successfully executed" id="create folder table" duration=492.781µs 2026-03-08T23:04:01.695 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.568447291Z level=info msg="Executing migration" id="Add index for parent_uid" 2026-03-08T23:04:01.695 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.569122383Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=675.092µs 2026-03-08T23:04:01.695 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.570778064Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id" 2026-03-08T23:04:01.695 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.57140728Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=629.095µs 2026-03-08T23:04:01.695 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.57262248Z level=info msg="Executing migration" id="Update folder title length" 2026-03-08T23:04:01.695 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.57269797Z level=info msg="Migration successfully executed" id="Update folder title length" duration=75.79µs 2026-03-08T23:04:01.695 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.573790411Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid" 2026-03-08T23:04:01.695 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.574342002Z level=info 
msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=551.541µs 2026-03-08T23:04:01.695 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.575680532Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid" 2026-03-08T23:04:01.695 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.576171278Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=490.836µs 2026-03-08T23:04:01.695 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.577105333Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id" 2026-03-08T23:04:01.695 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.577641965Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=535.729µs 2026-03-08T23:04:01.695 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.579097274Z level=info msg="Executing migration" id="Sync dashboard and folder table" 2026-03-08T23:04:01.695 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.579385341Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=286.645µs 2026-03-08T23:04:01.695 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.580242873Z level=info msg="Executing migration" id="Remove ghost folders from the folder table" 2026-03-08T23:04:01.695 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator 
t=2026-03-08T23:04:01.58042848Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=185.868µs 2026-03-08T23:04:01.695 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.581472069Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id" 2026-03-08T23:04:01.695 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.581966412Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" duration=494.192µs 2026-03-08T23:04:01.695 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.582866724Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_uid" 2026-03-08T23:04:01.695 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.583407213Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_uid" duration=540.51µs 2026-03-08T23:04:01.695 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.584578181Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id" 2026-03-08T23:04:01.695 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.585171479Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=593.147µs 2026-03-08T23:04:01.695 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.586208937Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title" 2026-03-08T23:04:01.695 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 
bash[51186]: logger=migrator t=2026-03-08T23:04:01.586775635Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=566.558µs 2026-03-08T23:04:01.695 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.587969896Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id" 2026-03-08T23:04:01.695 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.588457746Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=487.951µs 2026-03-08T23:04:01.695 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.589529368Z level=info msg="Executing migration" id="create anon_device table" 2026-03-08T23:04:01.695 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.589960243Z level=info msg="Migration successfully executed" id="create anon_device table" duration=430.765µs 2026-03-08T23:04:01.695 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.590877436Z level=info msg="Executing migration" id="add unique index anon_device.device_id" 2026-03-08T23:04:01.695 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.591431241Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=553.945µs 2026-03-08T23:04:01.695 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.592848198Z level=info msg="Executing migration" id="add index anon_device.updated_at" 2026-03-08T23:04:01.695 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator 
t=2026-03-08T23:04:01.593369641Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=521.513µs 2026-03-08T23:04:01.695 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.594871357Z level=info msg="Executing migration" id="create signing_key table" 2026-03-08T23:04:01.695 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.595409331Z level=info msg="Migration successfully executed" id="create signing_key table" duration=538.025µs 2026-03-08T23:04:01.695 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.596628238Z level=info msg="Executing migration" id="add unique index signing_key.key_id" 2026-03-08T23:04:01.695 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.597320131Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=691.552µs 2026-03-08T23:04:01.695 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.598821764Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore" 2026-03-08T23:04:01.695 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.599382011Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=560.568µs 2026-03-08T23:04:01.695 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.600349318Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore" 2026-03-08T23:04:01.695 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator 
t=2026-03-08T23:04:01.600581712Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=233.816µs 2026-03-08T23:04:01.695 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.601618559Z level=info msg="Executing migration" id="Add folder_uid for dashboard" 2026-03-08T23:04:01.695 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.604369687Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=2.750767ms 2026-03-08T23:04:01.696 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.607221814Z level=info msg="Executing migration" id="Populate dashboard folder_uid column" 2026-03-08T23:04:01.696 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.609545575Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=2.324121ms 2026-03-08T23:04:01.696 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.613065941Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title" 2026-03-08T23:04:01.696 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.613600318Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=534.387µs 2026-03-08T23:04:01.696 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.614930723Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title" 2026-03-08T23:04:01.696 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 
vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.615444583Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_id_title" duration=513.86µs 2026-03-08T23:04:01.696 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.616650556Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_uid_title" 2026-03-08T23:04:01.696 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.617173353Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=523.258µs 2026-03-08T23:04:01.696 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.618224525Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" 2026-03-08T23:04:01.696 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.61880014Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=574.553µs 2026-03-08T23:04:01.696 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.619732862Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title" 2026-03-08T23:04:01.696 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.620428922Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=696.04µs 2026-03-08T23:04:01.696 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.621799242Z level=info msg="Executing migration" id="create sso_setting 
table" 2026-03-08T23:04:01.696 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.622759766Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=956.317µs 2026-03-08T23:04:01.696 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.624091454Z level=info msg="Executing migration" id="copy kvstore migration status to each org" 2026-03-08T23:04:01.696 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.624595365Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=506.115µs 2026-03-08T23:04:01.696 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.626018894Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status" 2026-03-08T23:04:01.696 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.626219408Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=200.674µs 2026-03-08T23:04:01.696 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.627341584Z level=info msg="Executing migration" id="alter kv_store.value to longtext" 2026-03-08T23:04:01.696 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.627416574Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=75.231µs 2026-03-08T23:04:01.696 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.628571471Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table" 
2026-03-08T23:04:01.696 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.631168152Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=2.5963ms 2026-03-08T23:04:01.696 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.632298553Z level=info msg="Executing migration" id="add notification_settings column to alert_rule_version table" 2026-03-08T23:04:01.696 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.634826454Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" duration=2.52738ms 2026-03-08T23:04:01.696 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.636408679Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration" 2026-03-08T23:04:01.696 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.636631797Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=223.258µs 2026-03-08T23:04:01.696 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=migrator t=2026-03-08T23:04:01.63776935Z level=info msg="migrations completed" performed=547 skipped=0 duration=1.207119133s 2026-03-08T23:04:01.696 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=sqlstore t=2026-03-08T23:04:01.638388979Z level=info msg="Created default organization" 2026-03-08T23:04:01.696 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=secrets t=2026-03-08T23:04:01.639443859Z level=info msg="Envelope encryption state" enabled=true 
currentprovider=secretKey.v1 2026-03-08T23:04:01.696 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=plugin.store t=2026-03-08T23:04:01.648629836Z level=info msg="Loading plugins..." 2026-03-08T23:04:02.050 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=local.finder t=2026-03-08T23:04:01.692552345Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled 2026-03-08T23:04:02.050 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=plugin.store t=2026-03-08T23:04:01.692618329Z level=info msg="Plugins loaded" count=55 duration=43.988723ms 2026-03-08T23:04:02.050 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=query_data t=2026-03-08T23:04:01.694727156Z level=info msg="Query Service initialization" 2026-03-08T23:04:02.050 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=live.push_http t=2026-03-08T23:04:01.696338878Z level=info msg="Live Push Gateway initialization" 2026-03-08T23:04:02.050 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=ngalert.migration t=2026-03-08T23:04:01.704469914Z level=info msg=Starting 2026-03-08T23:04:02.050 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=ngalert.migration t=2026-03-08T23:04:01.704814408Z level=info msg="Applying transition" currentType=Legacy desiredType=UnifiedAlerting cleanOnDowngrade=false cleanOnUpgrade=false 2026-03-08T23:04:02.050 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=ngalert.migration orgID=1 t=2026-03-08T23:04:01.705154012Z level=info msg="Migrating alerts for organisation" 2026-03-08T23:04:02.050 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=ngalert.migration orgID=1 t=2026-03-08T23:04:01.705547678Z level=info msg="Alerts found to migrate" 
alerts=0 2026-03-08T23:04:02.050 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=ngalert.migration t=2026-03-08T23:04:01.706362749Z level=info msg="Completed alerting migration" 2026-03-08T23:04:02.050 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=ngalert.state.manager t=2026-03-08T23:04:01.715233089Z level=info msg="Running in alternative execution of Error/NoData mode" 2026-03-08T23:04:02.050 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=infra.usagestats.collector t=2026-03-08T23:04:01.716265166Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2 2026-03-08T23:04:02.050 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=provisioning.datasources t=2026-03-08T23:04:01.717405495Z level=info msg="inserting datasource from configuration" name=Dashboard1 uid=P43CA22E17D0F9596 2026-03-08T23:04:02.050 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=provisioning.datasources t=2026-03-08T23:04:01.722834375Z level=info msg="inserting datasource from configuration" name=Loki uid=P8E80F9AEF21F6940 2026-03-08T23:04:02.051 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=provisioning.alerting t=2026-03-08T23:04:01.728735157Z level=info msg="starting to provision alerting" 2026-03-08T23:04:02.051 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=provisioning.alerting t=2026-03-08T23:04:01.728744124Z level=info msg="finished to provision alerting" 2026-03-08T23:04:02.051 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=grafanaStorageLogger t=2026-03-08T23:04:01.729071195Z level=info msg="Storage starting" 2026-03-08T23:04:02.051 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=http.server t=2026-03-08T23:04:01.730168915Z level=info msg="HTTP 
Server TLS settings" MinTLSVersion=TLS1.2 configuredciphers=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA 2026-03-08T23:04:02.051 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=http.server t=2026-03-08T23:04:01.73041773Z level=info msg="HTTP Server Listen" address=[::]:3000 protocol=https subUrl= socket= 2026-03-08T23:04:02.051 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=ngalert.state.manager t=2026-03-08T23:04:01.730604439Z level=info msg="Warming state cache for startup" 2026-03-08T23:04:02.051 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=ngalert.state.manager t=2026-03-08T23:04:01.731930005Z level=info msg="State cache has been initialized" states=0 duration=1.324925ms 2026-03-08T23:04:02.051 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=provisioning.dashboard t=2026-03-08T23:04:01.732643708Z level=info msg="starting to provision dashboards" 2026-03-08T23:04:02.051 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=sqlstore.transactions t=2026-03-08T23:04:01.74743264Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked" 2026-03-08T23:04:02.051 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=ngalert.multiorg.alertmanager t=2026-03-08T23:04:01.748347239Z level=info msg="Starting MultiOrg Alertmanager" 2026-03-08T23:04:02.051 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=ngalert.scheduler 
t=2026-03-08T23:04:01.748360764Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=1 2026-03-08T23:04:02.051 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=ticker t=2026-03-08T23:04:01.748857982Z level=info msg=starting first_tick=2026-03-08T23:04:10Z 2026-03-08T23:04:02.051 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=plugins.update.checker t=2026-03-08T23:04:01.804217373Z level=info msg="Update check succeeded" duration=56.681651ms 2026-03-08T23:04:02.051 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:01 vm11 bash[51186]: logger=provisioning.dashboard t=2026-03-08T23:04:01.87300793Z level=info msg="finished to provision dashboards" 2026-03-08T23:04:02.307 INFO:journalctl@ceph.iscsi.iscsi.a.vm11.stdout:Mar 08 23:04:02 vm11 bash[48986]: debug there is no tcmu-runner data available 2026-03-08T23:04:02.307 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:02 vm11 bash[51186]: logger=grafana-apiserver t=2026-03-08T23:04:02.054537452Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager" 2026-03-08T23:04:02.307 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:02 vm11 bash[51186]: logger=grafana-apiserver t=2026-03-08T23:04:02.055015354Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager" 2026-03-08T23:04:03.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:03 vm06 bash[20625]: cluster 2026-03-08T23:04:01.628571+0000 mgr.y (mgr.24419) 37 : cluster [DBG] pgmap v14: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:04:03.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:03 vm06 bash[20625]: cluster 2026-03-08T23:04:01.628571+0000 mgr.y (mgr.24419) 37 : cluster [DBG] pgmap v14: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:04:03.279 
INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:03 vm06 bash[20625]: audit 2026-03-08T23:04:02.050596+0000 mgr.y (mgr.24419) 38 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:04:03.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:03 vm06 bash[20625]: audit 2026-03-08T23:04:02.050596+0000 mgr.y (mgr.24419) 38 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:04:03.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:03 vm06 bash[20625]: audit 2026-03-08T23:04:02.731674+0000 mon.a (mon.0) 777 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:03.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:03 vm06 bash[20625]: audit 2026-03-08T23:04:02.731674+0000 mon.a (mon.0) 777 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:03.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:03 vm06 bash[27746]: cluster 2026-03-08T23:04:01.628571+0000 mgr.y (mgr.24419) 37 : cluster [DBG] pgmap v14: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:04:03.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:03 vm06 bash[27746]: cluster 2026-03-08T23:04:01.628571+0000 mgr.y (mgr.24419) 37 : cluster [DBG] pgmap v14: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:04:03.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:03 vm06 bash[27746]: audit 2026-03-08T23:04:02.050596+0000 mgr.y (mgr.24419) 38 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:04:03.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:03 vm06 bash[27746]: audit 2026-03-08T23:04:02.050596+0000 mgr.y (mgr.24419) 38 : audit [DBG] from='client.24421 -' 
entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:04:03.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:03 vm06 bash[27746]: audit 2026-03-08T23:04:02.731674+0000 mon.a (mon.0) 777 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:03.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:03 vm06 bash[27746]: audit 2026-03-08T23:04:02.731674+0000 mon.a (mon.0) 777 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:03.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:03 vm11 bash[23232]: cluster 2026-03-08T23:04:01.628571+0000 mgr.y (mgr.24419) 37 : cluster [DBG] pgmap v14: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:04:03.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:03 vm11 bash[23232]: cluster 2026-03-08T23:04:01.628571+0000 mgr.y (mgr.24419) 37 : cluster [DBG] pgmap v14: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:04:03.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:03 vm11 bash[23232]: audit 2026-03-08T23:04:02.050596+0000 mgr.y (mgr.24419) 38 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:04:03.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:03 vm11 bash[23232]: audit 2026-03-08T23:04:02.050596+0000 mgr.y (mgr.24419) 38 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:04:03.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:03 vm11 bash[23232]: audit 2026-03-08T23:04:02.731674+0000 mon.a (mon.0) 777 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:03.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:03 vm11 bash[23232]: audit 2026-03-08T23:04:02.731674+0000 mon.a (mon.0) 777 : audit 
[INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:04.592 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/mon.c/config 2026-03-08T23:04:04.957 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-08T23:04:04.957 INFO:teuthology.orchestra.run.vm06.stdout:{"epoch":66,"fsid":"e2eb96e6-1b41-11f1-83e5-75f1b5373d30","created":"2026-03-08T22:56:50.043169+0000","modified":"2026-03-08T23:03:37.586113+0000","last_up_change":"2026-03-08T23:02:41.969578+0000","last_in_change":"2026-03-08T23:02:22.838818+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":18,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":6,"max_osd":8,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"squid","allow_crimson":false,"pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-08T22:59:49.511510+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"22","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":
"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}},"read_balance":{"score_type":"Fair distribution","score_acting":7.8899998664855957,"score_stable":7.8899998664855957,"optimal_score":0.37999999523162842,"raw_score_acting":3,"raw_score_stable":3,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":2,"pool_name":".rgw.root","create_time":"2026-03-08T23:03:03.094283+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":32,"pg_placement_num":32,"pg_placement_num_target":32,"pg_num_target":32,"pg_num_pending":32,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"56","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micr
o":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{},"application_metadata":{"rgw":{}},"read_balance":{"score_type":"Fair distribution","score_acting":1.5,"score_stable":1.5,"optimal_score":1,"raw_score_acting":1.5,"raw_score_stable":1.5,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":3,"pool_name":"default.rgw.log","create_time":"2026-03-08T23:03:05.405958+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":32,"pg_placement_num":32,"pg_placement_num_target":32,"pg_num_target":32,"pg_num_pending":32,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"58","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"
use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{},"application_metadata":{"rgw":{}},"read_balance":{"score_type":"Fair distribution","score_acting":1.5,"score_stable":1.5,"optimal_score":1,"raw_score_acting":1.5,"raw_score_stable":1.5,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":4,"pool_name":"datapool","create_time":"2026-03-08T23:03:06.757962+0000","flags":8193,"flags_names":"hashpspool,selfmanaged_snaps","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":3,"pg_placement_num":3,"pg_placement_num_target":3,"pg_num_target":3,"pg_num_pending":3,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"64","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":2,"snap_epoch":64,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expect
ed_num_objects":0,"fast_read":false,"options":{},"application_metadata":{"rbd":{}},"read_balance":{"score_type":"Fair distribution","score_acting":2.6500000953674316,"score_stable":2.6500000953674316,"optimal_score":0.87999999523162842,"raw_score_acting":2.3299999237060547,"raw_score_stable":2.3299999237060547,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":5,"pool_name":"default.rgw.control","create_time":"2026-03-08T23:03:07.157765+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":32,"pg_placement_num":32,"pg_placement_num_target":32,"pg_num_target":32,"pg_num_pending":32,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"60","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{},"application_metadata":{"rgw":{}},"read_balance":{"score_type":
"Fair distribution","score_acting":1.25,"score_stable":1.25,"optimal_score":1,"raw_score_acting":1.25,"raw_score_stable":1.25,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":6,"pool_name":"default.rgw.meta","create_time":"2026-03-08T23:03:09.333909+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":32,"pg_placement_num":32,"pg_placement_num_target":32,"pg_num_target":32,"pg_num_pending":32,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"62","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_autoscale_bias":4},"application_metadata":{"rgw":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":1.75,"score_stable":1.75,"optimal_score":1,"raw_score_acting":1.75,"raw_score_stable":1.75,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"f584135b-773d-4be0-b5f4-b849576faa2e","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":8,"up_thru":60,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6801","nonce":1756339851}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6802","nonce":1756339851}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6804","nonce":1756339851}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6803","nonce":1756339851}]},"public_addr":"192.168.123.106:6801/1756339851","cluster_addr":"192.168.123.106:6802/1756339851","heartbeat_back_addr":"192.168.123.106:6804/1756339851","heartbeat_front_addr":"192.168.123.106:6803/1756339851","state":["exists","up"]},{"osd":1,"uuid":"2022422b-3e71-4162-b64b-3d25e2ad079e","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":60,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6805","nonce":2598119140}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6806","nonce":2598119140}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6808","nonce":2598119140}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6807","nonce":2598119140}]},"public_addr":"192.168.123.106:6805/2598119140","cluster_addr":"192.168.123.106:6806/2598119140","heartbeat_back_addr":"192.168.123.106:6808/2598119140","heartbeat_front_addr":"192.168.123.106:6807/2598119140","state":["exists","up"]},{"osd":2,"uuid":"127338cf-5856-4d11-8a9b-9cbd216d8507","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from
":18,"up_thru":60,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6809","nonce":2508962009}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6810","nonce":2508962009}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6812","nonce":2508962009}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6811","nonce":2508962009}]},"public_addr":"192.168.123.106:6809/2508962009","cluster_addr":"192.168.123.106:6810/2508962009","heartbeat_back_addr":"192.168.123.106:6812/2508962009","heartbeat_front_addr":"192.168.123.106:6811/2508962009","state":["exists","up"]},{"osd":3,"uuid":"19da1389-a7b0-483c-b2d4-8be50f26c1c4","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":26,"up_thru":60,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6813","nonce":3847325262}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6814","nonce":3847325262}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6816","nonce":3847325262}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6815","nonce":3847325262}]},"public_addr":"192.168.123.106:6813/3847325262","cluster_addr":"192.168.123.106:6814/3847325262","heartbeat_back_addr":"192.168.123.106:6816/3847325262","heartbeat_front_addr":"192.168.123.106:6815/3847325262","state":["exists","up"]},{"osd":4,"uuid":"2b8b0ad5-79bc-4b4c-a515-bc6c029f416f","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":32,"up_thru":60,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.111:6800","nonce":1718317342}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.111:6801","nonce":1718317342}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.111:6803","nonce":1718317342}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2",
"addr":"192.168.123.111:6802","nonce":1718317342}]},"public_addr":"192.168.123.111:6800/1718317342","cluster_addr":"192.168.123.111:6801/1718317342","heartbeat_back_addr":"192.168.123.111:6803/1718317342","heartbeat_front_addr":"192.168.123.111:6802/1718317342","state":["exists","up"]},{"osd":5,"uuid":"ebf4133c-ae3a-4afe-9e9e-4c894f65f53e","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":38,"up_thru":60,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.111:6804","nonce":3102108212}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.111:6805","nonce":3102108212}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.111:6807","nonce":3102108212}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.111:6806","nonce":3102108212}]},"public_addr":"192.168.123.111:6804/3102108212","cluster_addr":"192.168.123.111:6805/3102108212","heartbeat_back_addr":"192.168.123.111:6807/3102108212","heartbeat_front_addr":"192.168.123.111:6806/3102108212","state":["exists","up"]},{"osd":6,"uuid":"1359b0d9-00db-474d-93f0-8246b9a8fa82","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":44,"up_thru":58,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.111:6808","nonce":3646507391}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.111:6809","nonce":3646507391}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.111:6811","nonce":3646507391}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.111:6810","nonce":3646507391}]},"public_addr":"192.168.123.111:6808/3646507391","cluster_addr":"192.168.123.111:6809/3646507391","heartbeat_back_addr":"192.168.123.111:6811/3646507391","heartbeat_front_addr":"192.168.123.111:6810/3646507391","state":["exists","up"]},{"osd":7,"uuid":"29b40029-6843-47e4-b83e-af6cefd3e500","up":1,"in":1,"weight":1,"primary_
affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":51,"up_thru":60,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.111:6812","nonce":5515467}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.111:6813","nonce":5515467}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.111:6815","nonce":5515467}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.111:6814","nonce":5515467}]},"public_addr":"192.168.123.111:6812/5515467","cluster_addr":"192.168.123.111:6813/5515467","heartbeat_back_addr":"192.168.123.111:6815/5515467","heartbeat_front_addr":"192.168.123.111:6814/5515467","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-08T22:58:36.480874+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-08T22:59:10.436552+0000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-08T22:59:44.575086+0000","dead_epoch":0},{"osd":3,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-08T23:00:19.486922+0000","dead_epoch":0},{"osd":4,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-08T23:00:54.034338+0000","dead_epoch":0},{"osd":5,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-08T23:01:29.981522+0000","dead_epoch":0},{"osd":6,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_w
eight":0,"last_purged_snaps_scrub":"2026-03-08T23:02:03.219993+0000","dead_epoch":0},{"osd":7,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-08T23:02:39.212275+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.106:0/3526356403":"2026-03-09T23:03:37.586080+0000","192.168.123.106:0/2523815248":"2026-03-09T22:57:01.174562+0000","192.168.123.106:0/1915233046":"2026-03-09T22:57:11.437532+0000","192.168.123.106:0/1313816001":"2026-03-09T22:57:01.174562+0000","192.168.123.106:0/2787078610":"2026-03-09T22:57:01.174562+0000","192.168.123.106:6800/1101559289":"2026-03-09T22:57:01.174562+0000","192.168.123.106:0/1991674123":"2026-03-09T23:03:37.586080+0000","192.168.123.106:0/1740051211":"2026-03-09T22:57:11.437532+0000","192.168.123.106:0/410680846":"2026-03-09T22:57:11.437532+0000","192.168.123.106:6800/1580927884":"2026-03-09T22:57:11.437532+0000","192.168.123.106:0/472491601":"2026-03-09T23:03:37.586080+0000","192.168.123.106:0/1125702742":"2026-03-09T23:03:37.586080+0000","192.168.123.106:0/100617998":"2026-03-09T23:03:37.586080+0000","192.168.123.106:6800/3890676051":"2026-03-09T23:03:37.586080+0000"},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-08T23:04:05.037 INFO:tasks.cephadm.ceph_manager.ceph:all up! 
2026-03-08T23:04:05.038 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid e2eb96e6-1b41-11f1-83e5-75f1b5373d30 -- ceph osd dump --format=json 2026-03-08T23:04:05.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:05 vm06 bash[20625]: cluster 2026-03-08T23:04:03.628883+0000 mgr.y (mgr.24419) 39 : cluster [DBG] pgmap v15: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:04:05.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:05 vm06 bash[20625]: cluster 2026-03-08T23:04:03.628883+0000 mgr.y (mgr.24419) 39 : cluster [DBG] pgmap v15: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:04:05.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:05 vm06 bash[20625]: audit 2026-03-08T23:04:04.956755+0000 mon.a (mon.0) 778 : audit [DBG] from='client.? 192.168.123.106:0/351688934' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-08T23:04:05.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:05 vm06 bash[20625]: audit 2026-03-08T23:04:04.956755+0000 mon.a (mon.0) 778 : audit [DBG] from='client.? 
192.168.123.106:0/351688934' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-08T23:04:05.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:05 vm06 bash[27746]: cluster 2026-03-08T23:04:03.628883+0000 mgr.y (mgr.24419) 39 : cluster [DBG] pgmap v15: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:04:05.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:05 vm06 bash[27746]: cluster 2026-03-08T23:04:03.628883+0000 mgr.y (mgr.24419) 39 : cluster [DBG] pgmap v15: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:04:05.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:05 vm06 bash[27746]: audit 2026-03-08T23:04:04.956755+0000 mon.a (mon.0) 778 : audit [DBG] from='client.? 192.168.123.106:0/351688934' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-08T23:04:05.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:05 vm06 bash[27746]: audit 2026-03-08T23:04:04.956755+0000 mon.a (mon.0) 778 : audit [DBG] from='client.? 
192.168.123.106:0/351688934' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-08T23:04:05.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:05 vm11 bash[23232]: cluster 2026-03-08T23:04:03.628883+0000 mgr.y (mgr.24419) 39 : cluster [DBG] pgmap v15: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:04:05.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:05 vm11 bash[23232]: cluster 2026-03-08T23:04:03.628883+0000 mgr.y (mgr.24419) 39 : cluster [DBG] pgmap v15: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:04:05.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:05 vm11 bash[23232]: audit 2026-03-08T23:04:04.956755+0000 mon.a (mon.0) 778 : audit [DBG] from='client.? 192.168.123.106:0/351688934' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-08T23:04:05.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:05 vm11 bash[23232]: audit 2026-03-08T23:04:04.956755+0000 mon.a (mon.0) 778 : audit [DBG] from='client.? 
192.168.123.106:0/351688934' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-08T23:04:06.215 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:06 vm06 bash[20625]: audit 2026-03-08T23:04:05.100297+0000 mon.a (mon.0) 779 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:06.215 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:06 vm06 bash[20625]: audit 2026-03-08T23:04:05.100297+0000 mon.a (mon.0) 779 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:06.215 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:06 vm06 bash[20625]: audit 2026-03-08T23:04:05.110409+0000 mon.a (mon.0) 780 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:06.215 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:06 vm06 bash[20625]: audit 2026-03-08T23:04:05.110409+0000 mon.a (mon.0) 780 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:06.215 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:06 vm06 bash[20625]: audit 2026-03-08T23:04:05.675840+0000 mon.a (mon.0) 781 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:06.215 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:06 vm06 bash[20625]: audit 2026-03-08T23:04:05.675840+0000 mon.a (mon.0) 781 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:06.215 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:06 vm06 bash[20625]: audit 2026-03-08T23:04:05.682856+0000 mon.a (mon.0) 782 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:06.215 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:06 vm06 bash[20625]: audit 2026-03-08T23:04:05.682856+0000 mon.a (mon.0) 782 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:06.215 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:06 vm06 bash[20625]: audit 2026-03-08T23:04:05.686107+0000 mon.c (mon.2) 44 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 
2026-03-08T23:04:06.215 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:06 vm06 bash[20625]: audit 2026-03-08T23:04:05.686107+0000 mon.c (mon.2) 44 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:04:06.216 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:06 vm06 bash[20625]: audit 2026-03-08T23:04:05.687046+0000 mon.c (mon.2) 45 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:04:06.216 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:06 vm06 bash[20625]: audit 2026-03-08T23:04:05.687046+0000 mon.c (mon.2) 45 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:04:06.216 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:06 vm06 bash[20625]: audit 2026-03-08T23:04:05.690983+0000 mon.a (mon.0) 783 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:06.216 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:06 vm06 bash[20625]: audit 2026-03-08T23:04:05.690983+0000 mon.a (mon.0) 783 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:06.216 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:06 vm06 bash[27746]: audit 2026-03-08T23:04:05.100297+0000 mon.a (mon.0) 779 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:06.216 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:06 vm06 bash[27746]: audit 2026-03-08T23:04:05.100297+0000 mon.a (mon.0) 779 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:06.216 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:06 vm06 bash[27746]: audit 2026-03-08T23:04:05.110409+0000 mon.a (mon.0) 780 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:06.216 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:06 vm06 bash[27746]: audit 
2026-03-08T23:04:05.110409+0000 mon.a (mon.0) 780 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:06.216 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:06 vm06 bash[27746]: audit 2026-03-08T23:04:05.675840+0000 mon.a (mon.0) 781 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:06.216 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:06 vm06 bash[27746]: audit 2026-03-08T23:04:05.675840+0000 mon.a (mon.0) 781 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:06.216 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:06 vm06 bash[27746]: audit 2026-03-08T23:04:05.682856+0000 mon.a (mon.0) 782 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:06.216 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:06 vm06 bash[27746]: audit 2026-03-08T23:04:05.682856+0000 mon.a (mon.0) 782 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:06.216 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:06 vm06 bash[27746]: audit 2026-03-08T23:04:05.686107+0000 mon.c (mon.2) 44 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:04:06.216 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:06 vm06 bash[27746]: audit 2026-03-08T23:04:05.686107+0000 mon.c (mon.2) 44 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:04:06.216 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:06 vm06 bash[27746]: audit 2026-03-08T23:04:05.687046+0000 mon.c (mon.2) 45 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:04:06.216 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:06 vm06 bash[27746]: audit 2026-03-08T23:04:05.687046+0000 mon.c (mon.2) 45 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": 
"auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:04:06.216 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:06 vm06 bash[27746]: audit 2026-03-08T23:04:05.690983+0000 mon.a (mon.0) 783 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:06.216 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:06 vm06 bash[27746]: audit 2026-03-08T23:04:05.690983+0000 mon.a (mon.0) 783 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:06.216 INFO:journalctl@ceph.alertmanager.a.vm06.stdout:Mar 08 23:04:06 vm06 systemd[1]: Stopping Ceph alertmanager.a for e2eb96e6-1b41-11f1-83e5-75f1b5373d30... 2026-03-08T23:04:06.216 INFO:journalctl@ceph.alertmanager.a.vm06.stdout:Mar 08 23:04:06 vm06 bash[55553]: ts=2026-03-08T23:04:06.213Z caller=main.go:583 level=info msg="Received SIGTERM, exiting gracefully..." 2026-03-08T23:04:06.529 INFO:journalctl@ceph.alertmanager.a.vm06.stdout:Mar 08 23:04:06 vm06 bash[56293]: ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30-alertmanager-a 2026-03-08T23:04:06.529 INFO:journalctl@ceph.alertmanager.a.vm06.stdout:Mar 08 23:04:06 vm06 systemd[1]: ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@alertmanager.a.service: Deactivated successfully. 2026-03-08T23:04:06.529 INFO:journalctl@ceph.alertmanager.a.vm06.stdout:Mar 08 23:04:06 vm06 systemd[1]: Stopped Ceph alertmanager.a for e2eb96e6-1b41-11f1-83e5-75f1b5373d30. 2026-03-08T23:04:06.529 INFO:journalctl@ceph.alertmanager.a.vm06.stdout:Mar 08 23:04:06 vm06 systemd[1]: Started Ceph alertmanager.a for e2eb96e6-1b41-11f1-83e5-75f1b5373d30. 
2026-03-08T23:04:06.529 INFO:journalctl@ceph.alertmanager.a.vm06.stdout:Mar 08 23:04:06 vm06 bash[56369]: ts=2026-03-08T23:04:06.426Z caller=main.go:240 level=info msg="Starting Alertmanager" version="(version=0.25.0, branch=HEAD, revision=258fab7cdd551f2cf251ed0348f0ad7289aee789)" 2026-03-08T23:04:06.529 INFO:journalctl@ceph.alertmanager.a.vm06.stdout:Mar 08 23:04:06 vm06 bash[56369]: ts=2026-03-08T23:04:06.426Z caller=main.go:241 level=info build_context="(go=go1.19.4, user=root@abe866dd5717, date=20221222-14:51:36)" 2026-03-08T23:04:06.529 INFO:journalctl@ceph.alertmanager.a.vm06.stdout:Mar 08 23:04:06 vm06 bash[56369]: ts=2026-03-08T23:04:06.428Z caller=cluster.go:185 level=info component=cluster msg="setting advertise address explicitly" addr=192.168.123.106 port=9094 2026-03-08T23:04:06.529 INFO:journalctl@ceph.alertmanager.a.vm06.stdout:Mar 08 23:04:06 vm06 bash[56369]: ts=2026-03-08T23:04:06.429Z caller=cluster.go:681 level=info component=cluster msg="Waiting for gossip to settle..." interval=2s 2026-03-08T23:04:06.529 INFO:journalctl@ceph.alertmanager.a.vm06.stdout:Mar 08 23:04:06 vm06 bash[56369]: ts=2026-03-08T23:04:06.446Z caller=coordinator.go:113 level=info component=configuration msg="Loading configuration file" file=/etc/alertmanager/alertmanager.yml 2026-03-08T23:04:06.529 INFO:journalctl@ceph.alertmanager.a.vm06.stdout:Mar 08 23:04:06 vm06 bash[56369]: ts=2026-03-08T23:04:06.447Z caller=coordinator.go:126 level=info component=configuration msg="Completed loading of configuration file" file=/etc/alertmanager/alertmanager.yml 2026-03-08T23:04:06.529 INFO:journalctl@ceph.alertmanager.a.vm06.stdout:Mar 08 23:04:06 vm06 bash[56369]: ts=2026-03-08T23:04:06.450Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9093 2026-03-08T23:04:06.529 INFO:journalctl@ceph.alertmanager.a.vm06.stdout:Mar 08 23:04:06 vm06 bash[56369]: ts=2026-03-08T23:04:06.450Z caller=tls_config.go:235 level=info msg="TLS is disabled." 
http2=false address=[::]:9093 2026-03-08T23:04:06.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:06 vm11 bash[23232]: audit 2026-03-08T23:04:05.100297+0000 mon.a (mon.0) 779 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:06.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:06 vm11 bash[23232]: audit 2026-03-08T23:04:05.100297+0000 mon.a (mon.0) 779 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:06.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:06 vm11 bash[23232]: audit 2026-03-08T23:04:05.110409+0000 mon.a (mon.0) 780 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:06.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:06 vm11 bash[23232]: audit 2026-03-08T23:04:05.110409+0000 mon.a (mon.0) 780 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:06.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:06 vm11 bash[23232]: audit 2026-03-08T23:04:05.675840+0000 mon.a (mon.0) 781 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:06.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:06 vm11 bash[23232]: audit 2026-03-08T23:04:05.675840+0000 mon.a (mon.0) 781 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:06.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:06 vm11 bash[23232]: audit 2026-03-08T23:04:05.682856+0000 mon.a (mon.0) 782 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:06.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:06 vm11 bash[23232]: audit 2026-03-08T23:04:05.682856+0000 mon.a (mon.0) 782 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:06.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:06 vm11 bash[23232]: audit 2026-03-08T23:04:05.686107+0000 mon.c (mon.2) 44 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:04:06.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:06 vm11 
bash[23232]: audit 2026-03-08T23:04:05.686107+0000 mon.c (mon.2) 44 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:04:06.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:06 vm11 bash[23232]: audit 2026-03-08T23:04:05.687046+0000 mon.c (mon.2) 45 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:04:06.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:06 vm11 bash[23232]: audit 2026-03-08T23:04:05.687046+0000 mon.c (mon.2) 45 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:04:06.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:06 vm11 bash[23232]: audit 2026-03-08T23:04:05.690983+0000 mon.a (mon.0) 783 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:06.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:06 vm11 bash[23232]: audit 2026-03-08T23:04:05.690983+0000 mon.a (mon.0) 783 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:07.295 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:07 vm11 bash[23232]: cluster 2026-03-08T23:04:05.629327+0000 mgr.y (mgr.24419) 40 : cluster [DBG] pgmap v16: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:04:07.295 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:07 vm11 bash[23232]: cluster 2026-03-08T23:04:05.629327+0000 mgr.y (mgr.24419) 40 : cluster [DBG] pgmap v16: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:04:07.295 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:07 vm11 bash[23232]: cephadm 2026-03-08T23:04:05.703473+0000 mgr.y (mgr.24419) 41 : cephadm [INF] Reconfiguring alertmanager.a (dependencies changed)... 
2026-03-08T23:04:07.295 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:07 vm11 bash[23232]: cephadm 2026-03-08T23:04:05.703473+0000 mgr.y (mgr.24419) 41 : cephadm [INF] Reconfiguring alertmanager.a (dependencies changed)... 2026-03-08T23:04:07.295 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:07 vm11 bash[23232]: cephadm 2026-03-08T23:04:05.706706+0000 mgr.y (mgr.24419) 42 : cephadm [INF] Reconfiguring daemon alertmanager.a on vm06 2026-03-08T23:04:07.295 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:07 vm11 bash[23232]: cephadm 2026-03-08T23:04:05.706706+0000 mgr.y (mgr.24419) 42 : cephadm [INF] Reconfiguring daemon alertmanager.a on vm06 2026-03-08T23:04:07.295 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:07 vm11 bash[23232]: audit 2026-03-08T23:04:06.322847+0000 mon.a (mon.0) 784 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:07.295 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:07 vm11 bash[23232]: audit 2026-03-08T23:04:06.322847+0000 mon.a (mon.0) 784 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:07.295 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:07 vm11 bash[23232]: audit 2026-03-08T23:04:06.331120+0000 mon.a (mon.0) 785 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:07.295 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:07 vm11 bash[23232]: audit 2026-03-08T23:04:06.331120+0000 mon.a (mon.0) 785 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:07.295 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 08 23:04:07 vm11 systemd[1]: Stopping Ceph prometheus.a for e2eb96e6-1b41-11f1-83e5-75f1b5373d30... 
2026-03-08T23:04:07.427 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:07 vm06 bash[20625]: cluster 2026-03-08T23:04:05.629327+0000 mgr.y (mgr.24419) 40 : cluster [DBG] pgmap v16: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:04:07.427 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:07 vm06 bash[20625]: cluster 2026-03-08T23:04:05.629327+0000 mgr.y (mgr.24419) 40 : cluster [DBG] pgmap v16: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:04:07.427 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:07 vm06 bash[20625]: cephadm 2026-03-08T23:04:05.703473+0000 mgr.y (mgr.24419) 41 : cephadm [INF] Reconfiguring alertmanager.a (dependencies changed)... 2026-03-08T23:04:07.427 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:07 vm06 bash[20625]: cephadm 2026-03-08T23:04:05.703473+0000 mgr.y (mgr.24419) 41 : cephadm [INF] Reconfiguring alertmanager.a (dependencies changed)... 
2026-03-08T23:04:07.427 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:07 vm06 bash[20625]: cephadm 2026-03-08T23:04:05.706706+0000 mgr.y (mgr.24419) 42 : cephadm [INF] Reconfiguring daemon alertmanager.a on vm06 2026-03-08T23:04:07.427 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:07 vm06 bash[20625]: cephadm 2026-03-08T23:04:05.706706+0000 mgr.y (mgr.24419) 42 : cephadm [INF] Reconfiguring daemon alertmanager.a on vm06 2026-03-08T23:04:07.427 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:07 vm06 bash[20625]: audit 2026-03-08T23:04:06.322847+0000 mon.a (mon.0) 784 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:07.427 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:07 vm06 bash[20625]: audit 2026-03-08T23:04:06.322847+0000 mon.a (mon.0) 784 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:07.427 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:07 vm06 bash[20625]: audit 2026-03-08T23:04:06.331120+0000 mon.a (mon.0) 785 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:07.427 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:07 vm06 bash[20625]: audit 2026-03-08T23:04:06.331120+0000 mon.a (mon.0) 785 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:07.427 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:07 vm06 bash[27746]: cluster 2026-03-08T23:04:05.629327+0000 mgr.y (mgr.24419) 40 : cluster [DBG] pgmap v16: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:04:07.427 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:07 vm06 bash[27746]: cluster 2026-03-08T23:04:05.629327+0000 mgr.y (mgr.24419) 40 : cluster [DBG] pgmap v16: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:04:07.427 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:07 vm06 bash[27746]: cephadm 2026-03-08T23:04:05.703473+0000 mgr.y (mgr.24419) 41 : cephadm [INF] Reconfiguring 
alertmanager.a (dependencies changed)...
2026-03-08T23:04:07.427 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:07 vm06 bash[27746]: cephadm 2026-03-08T23:04:05.706706+0000 mgr.y (mgr.24419) 42 : cephadm [INF] Reconfiguring daemon alertmanager.a on vm06
2026-03-08T23:04:07.428 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:07 vm06 bash[27746]: audit 2026-03-08T23:04:06.322847+0000 mon.a (mon.0) 784 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:04:07.428 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:07 vm06 bash[27746]: audit 2026-03-08T23:04:06.331120+0000 mon.a (mon.0) 785 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:04:07.558 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 08 23:04:07 vm11 bash[49943]: ts=2026-03-08T23:04:07.293Z caller=main.go:964 level=warn msg="Received SIGTERM, exiting gracefully..."
2026-03-08T23:04:07.558 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 08 23:04:07 vm11 bash[49943]: ts=2026-03-08T23:04:07.294Z caller=main.go:988 level=info msg="Stopping scrape discovery manager..."
2026-03-08T23:04:07.558 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 08 23:04:07 vm11 bash[49943]: ts=2026-03-08T23:04:07.294Z caller=main.go:1002 level=info msg="Stopping notify discovery manager..."
2026-03-08T23:04:07.558 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 08 23:04:07 vm11 bash[49943]: ts=2026-03-08T23:04:07.294Z caller=manager.go:177 level=info component="rule manager" msg="Stopping rule manager..."
2026-03-08T23:04:07.558 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 08 23:04:07 vm11 bash[49943]: ts=2026-03-08T23:04:07.294Z caller=main.go:984 level=info msg="Scrape discovery manager stopped"
2026-03-08T23:04:07.558 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 08 23:04:07 vm11 bash[49943]: ts=2026-03-08T23:04:07.294Z caller=main.go:998 level=info msg="Notify discovery manager stopped"
2026-03-08T23:04:07.558 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 08 23:04:07 vm11 bash[49943]: ts=2026-03-08T23:04:07.294Z caller=manager.go:187 level=info component="rule manager" msg="Rule manager stopped"
2026-03-08T23:04:07.558 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 08 23:04:07 vm11 bash[49943]: ts=2026-03-08T23:04:07.294Z caller=main.go:1039 level=info msg="Stopping scrape manager..."
2026-03-08T23:04:07.558 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 08 23:04:07 vm11 bash[49943]: ts=2026-03-08T23:04:07.294Z caller=main.go:1031 level=info msg="Scrape manager stopped"
2026-03-08T23:04:07.558 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 08 23:04:07 vm11 bash[49943]: ts=2026-03-08T23:04:07.295Z caller=notifier.go:618 level=info component=notifier msg="Stopping notification manager..."
2026-03-08T23:04:07.558 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 08 23:04:07 vm11 bash[49943]: ts=2026-03-08T23:04:07.295Z caller=main.go:1261 level=info msg="Notifier manager stopped"
2026-03-08T23:04:07.558 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 08 23:04:07 vm11 bash[49943]: ts=2026-03-08T23:04:07.295Z caller=main.go:1273 level=info msg="See you next time!"
2026-03-08T23:04:07.558 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 08 23:04:07 vm11 bash[51745]: ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30-prometheus-a
2026-03-08T23:04:07.558 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 08 23:04:07 vm11 systemd[1]: ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@prometheus.a.service: Deactivated successfully.
2026-03-08T23:04:07.558 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 08 23:04:07 vm11 systemd[1]: Stopped Ceph prometheus.a for e2eb96e6-1b41-11f1-83e5-75f1b5373d30.
2026-03-08T23:04:07.558 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 08 23:04:07 vm11 systemd[1]: Started Ceph prometheus.a for e2eb96e6-1b41-11f1-83e5-75f1b5373d30.
2026-03-08T23:04:07.558 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 08 23:04:07 vm11 bash[51823]: ts=2026-03-08T23:04:07.492Z caller=main.go:617 level=info msg="Starting Prometheus Server" mode=server version="(version=2.51.0, branch=HEAD, revision=c05c15512acb675e3f6cd662a6727854e93fc024)"
2026-03-08T23:04:07.558 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 08 23:04:07 vm11 bash[51823]: ts=2026-03-08T23:04:07.492Z caller=main.go:622 level=info build_context="(go=go1.22.1, platform=linux/amd64, user=root@b5723e458358, date=20240319-10:54:45, tags=netgo,builtinassets,stringlabels)"
2026-03-08T23:04:07.558 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 08 23:04:07 vm11 bash[51823]: ts=2026-03-08T23:04:07.492Z caller=main.go:623 level=info host_details="(Linux 5.15.0-1092-kvm #97-Ubuntu SMP Fri Jan 23 15:00:24 UTC 2026 x86_64 vm11 (none))"
2026-03-08T23:04:07.558 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 08 23:04:07 vm11 bash[51823]: ts=2026-03-08T23:04:07.492Z caller=main.go:624 level=info fd_limits="(soft=1048576, hard=1048576)"
2026-03-08T23:04:07.558 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 08 23:04:07 vm11 bash[51823]: ts=2026-03-08T23:04:07.492Z caller=main.go:625 level=info vm_limits="(soft=unlimited, hard=unlimited)"
2026-03-08T23:04:07.558 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 08 23:04:07 vm11 bash[51823]: ts=2026-03-08T23:04:07.495Z caller=web.go:568 level=info component=web msg="Start listening for connections" address=:9095
2026-03-08T23:04:07.558 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 08 23:04:07 vm11 bash[51823]: ts=2026-03-08T23:04:07.496Z caller=main.go:1129 level=info msg="Starting TSDB ..."
2026-03-08T23:04:07.558 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 08 23:04:07 vm11 bash[51823]: ts=2026-03-08T23:04:07.498Z caller=tls_config.go:313 level=info component=web msg="Listening on" address=[::]:9095
2026-03-08T23:04:07.558 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 08 23:04:07 vm11 bash[51823]: ts=2026-03-08T23:04:07.498Z caller=tls_config.go:316 level=info component=web msg="TLS is disabled." http2=false address=[::]:9095
2026-03-08T23:04:07.558 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 08 23:04:07 vm11 bash[51823]: ts=2026-03-08T23:04:07.500Z caller=head.go:616 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any"
2026-03-08T23:04:07.558 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 08 23:04:07 vm11 bash[51823]: ts=2026-03-08T23:04:07.500Z caller=head.go:698 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=982ns
2026-03-08T23:04:07.558 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 08 23:04:07 vm11 bash[51823]: ts=2026-03-08T23:04:07.500Z caller=head.go:706 level=info component=tsdb msg="Replaying WAL, this may take a while"
2026-03-08T23:04:07.558 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 08 23:04:07 vm11 bash[51823]: ts=2026-03-08T23:04:07.500Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=1
2026-03-08T23:04:07.558 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 08 23:04:07 vm11 bash[51823]: ts=2026-03-08T23:04:07.500Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=1 maxSegment=1
2026-03-08T23:04:07.558 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 08 23:04:07 vm11 bash[51823]: ts=2026-03-08T23:04:07.500Z caller=head.go:815 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=17.723µs wal_replay_duration=286.785µs wbl_replay_duration=130ns total_replay_duration=315.079µs
2026-03-08T23:04:07.558 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 08 23:04:07 vm11 bash[51823]: ts=2026-03-08T23:04:07.501Z caller=main.go:1150 level=info fs_type=EXT4_SUPER_MAGIC
2026-03-08T23:04:07.558 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 08 23:04:07 vm11 bash[51823]: ts=2026-03-08T23:04:07.501Z caller=main.go:1153 level=info msg="TSDB started"
2026-03-08T23:04:07.558 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 08 23:04:07 vm11 bash[51823]: ts=2026-03-08T23:04:07.501Z caller=main.go:1335 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml
2026-03-08T23:04:07.558 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 08 23:04:07 vm11 bash[51823]: ts=2026-03-08T23:04:07.530Z caller=main.go:1372 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=29.287726ms db_storage=1.473µs remote_storage=1.273µs web_handler=390ns query_engine=1.112µs scrape=12.306597ms scrape_sd=155.179µs notify=9.277µs notify_sd=6.081µs rules=16.567827ms tracing=6.593µs
2026-03-08T23:04:07.558 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 08 23:04:07 vm11 bash[51823]: ts=2026-03-08T23:04:07.531Z caller=main.go:1114 level=info msg="Server is ready to receive web requests."
2026-03-08T23:04:07.558 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 08 23:04:07 vm11 bash[51823]: ts=2026-03-08T23:04:07.531Z caller=manager.go:163 level=info component="rule manager" msg="Starting rule manager..."
2026-03-08T23:04:07.779 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:04:07 vm06 bash[20883]: [08/Mar/2026:23:04:07] ENGINE Bus STOPPING
2026-03-08T23:04:08.128 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:04:07 vm06 bash[20883]: [08/Mar/2026:23:04:07] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down
2026-03-08T23:04:08.128 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:04:07 vm06 bash[20883]: [08/Mar/2026:23:04:07] ENGINE Bus STOPPED
2026-03-08T23:04:08.128 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:04:07 vm06 bash[20883]: [08/Mar/2026:23:04:07] ENGINE Bus STARTING
2026-03-08T23:04:08.128 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:04:07 vm06 bash[20883]: [08/Mar/2026:23:04:07] ENGINE Serving on http://:::9283
2026-03-08T23:04:08.128 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:04:07 vm06 bash[20883]: [08/Mar/2026:23:04:07] ENGINE Bus STARTED
2026-03-08T23:04:08.128 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:04:07 vm06 bash[20883]: [08/Mar/2026:23:04:07] ENGINE Bus STOPPING
2026-03-08T23:04:08.128 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:04:07 vm06 bash[20883]: [08/Mar/2026:23:04:07] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down
2026-03-08T23:04:08.128 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:04:07 vm06 bash[20883]: [08/Mar/2026:23:04:07] ENGINE Bus STOPPED
2026-03-08T23:04:08.128 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:04:07 vm06 bash[20883]: [08/Mar/2026:23:04:07] ENGINE Bus STARTING
2026-03-08T23:04:08.128 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:04:08 vm06 bash[20883]: [08/Mar/2026:23:04:08] ENGINE Serving on http://:::9283
2026-03-08T23:04:08.128 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:04:08 vm06 bash[20883]: [08/Mar/2026:23:04:08] ENGINE Bus STARTED
2026-03-08T23:04:08.128 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:04:08 vm06 bash[20883]: [08/Mar/2026:23:04:08] ENGINE Bus STOPPING
2026-03-08T23:04:08.128 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:04:08 vm06 bash[20883]: [08/Mar/2026:23:04:08] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down
2026-03-08T23:04:08.128 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:04:08 vm06 bash[20883]: [08/Mar/2026:23:04:08] ENGINE Bus STOPPED
2026-03-08T23:04:08.128 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:04:08 vm06 bash[20883]: [08/Mar/2026:23:04:08] ENGINE Bus STARTING
2026-03-08T23:04:08.132 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:08 vm06 bash[20625]: cephadm 2026-03-08T23:04:06.335400+0000 mgr.y (mgr.24419) 43 : cephadm [INF] Reconfiguring prometheus.a (dependencies changed)...
2026-03-08T23:04:08.132 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:08 vm06 bash[20625]: cephadm 2026-03-08T23:04:06.517318+0000 mgr.y (mgr.24419) 44 : cephadm [INF] Reconfiguring daemon prometheus.a on vm11
2026-03-08T23:04:08.132 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:08 vm06 bash[20625]: audit 2026-03-08T23:04:07.407431+0000 mon.a (mon.0) 786 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:04:08.132 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:08 vm06 bash[20625]: audit 2026-03-08T23:04:07.415098+0000 mon.a (mon.0) 787 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:04:08.132 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:08 vm06 bash[20625]: audit 2026-03-08T23:04:07.419110+0000 mon.c (mon.2) 46 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
2026-03-08T23:04:08.132 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:08 vm06 bash[20625]: audit 2026-03-08T23:04:07.420618+0000 mon.c (mon.2) 47 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm06.local:9093"}]: dispatch
2026-03-08T23:04:08.132 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:08 vm06 bash[20625]: audit 2026-03-08T23:04:07.424930+0000 mon.a (mon.0) 788 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:04:08.132 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:08 vm06 bash[20625]: audit 2026-03-08T23:04:07.434149+0000 mon.c (mon.2) 48 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
2026-03-08T23:04:08.132 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:08 vm06 bash[20625]: audit 2026-03-08T23:04:07.435439+0000 mon.c (mon.2) 49 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm11.local:3000"}]: dispatch
2026-03-08T23:04:08.132 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:08 vm06 bash[20625]: audit 2026-03-08T23:04:07.440994+0000 mon.a (mon.0) 789 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:04:08.132 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:08 vm06 bash[20625]: audit 2026-03-08T23:04:07.449728+0000 mon.c (mon.2) 50 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch
2026-03-08T23:04:08.132 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:08 vm06 bash[20625]: audit 2026-03-08T23:04:07.451025+0000 mon.c (mon.2) 51 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm11.local:9095"}]: dispatch
2026-03-08T23:04:08.132 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:08 vm06 bash[20625]: audit 2026-03-08T23:04:07.456128+0000 mon.a (mon.0) 790 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:04:08.132 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:08 vm06 bash[20625]: audit 2026-03-08T23:04:07.491078+0000 mon.c (mon.2) 52 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T23:04:08.132 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:08 vm06 bash[20625]: audit 2026-03-08T23:04:07.729914+0000 mon.c (mon.2) 53 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-08T23:04:08.132 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:08 vm06 bash[27746]: cephadm 2026-03-08T23:04:06.335400+0000 mgr.y (mgr.24419) 43 : cephadm [INF] Reconfiguring prometheus.a (dependencies changed)...
2026-03-08T23:04:08.133 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:08 vm06 bash[27746]: cephadm 2026-03-08T23:04:06.517318+0000 mgr.y (mgr.24419) 44 : cephadm [INF] Reconfiguring daemon prometheus.a on vm11
2026-03-08T23:04:08.133 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:08 vm06 bash[27746]: audit 2026-03-08T23:04:07.407431+0000 mon.a (mon.0) 786 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:04:08.133 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:08 vm06 bash[27746]: audit 2026-03-08T23:04:07.415098+0000 mon.a (mon.0) 787 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:04:08.133 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:08 vm06 bash[27746]: audit 2026-03-08T23:04:07.419110+0000 mon.c (mon.2) 46 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
2026-03-08T23:04:08.133 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:08 vm06 bash[27746]: audit 2026-03-08T23:04:07.420618+0000 mon.c (mon.2) 47 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm06.local:9093"}]: dispatch
2026-03-08T23:04:08.133 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:08 vm06 bash[27746]: audit 2026-03-08T23:04:07.424930+0000 mon.a (mon.0) 788 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:04:08.133 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:08 vm06 bash[27746]: audit 2026-03-08T23:04:07.434149+0000 mon.c (mon.2) 48 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
2026-03-08T23:04:08.133 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:08 vm06 bash[27746]: audit 2026-03-08T23:04:07.435439+0000 mon.c (mon.2) 49 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm11.local:3000"}]: dispatch
2026-03-08T23:04:08.133 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:08 vm06 bash[27746]: audit 2026-03-08T23:04:07.440994+0000 mon.a (mon.0) 789 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:04:08.428 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:04:08 vm06 bash[20883]: [08/Mar/2026:23:04:08] ENGINE Serving on http://:::9283
2026-03-08T23:04:08.428 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:04:08 vm06 bash[20883]: [08/Mar/2026:23:04:08] ENGINE Bus STARTED
2026-03-08T23:04:08.429 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:08 vm06 bash[27746]: audit 2026-03-08T23:04:07.449728+0000 mon.c (mon.2) 50 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch
2026-03-08T23:04:08.429 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:08 vm06 bash[27746]: audit 2026-03-08T23:04:07.451025+0000 mon.c (mon.2) 51 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm11.local:9095"}]: dispatch
2026-03-08T23:04:08.429 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:08 vm06 bash[27746]: audit 2026-03-08T23:04:07.456128+0000 mon.a (mon.0) 790 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:04:08.429 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:08 vm06 bash[27746]: audit 2026-03-08T23:04:07.491078+0000 mon.c (mon.2) 52 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T23:04:08.429 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:08 vm06 bash[27746]: audit 2026-03-08T23:04:07.729914+0000 mon.c (mon.2) 53 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-08T23:04:08.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:08 vm11 bash[23232]: cephadm 2026-03-08T23:04:06.335400+0000 mgr.y (mgr.24419) 43 : cephadm [INF] Reconfiguring prometheus.a (dependencies changed)...
2026-03-08T23:04:08.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:08 vm11 bash[23232]: cephadm 2026-03-08T23:04:06.517318+0000 mgr.y (mgr.24419) 44 : cephadm [INF] Reconfiguring daemon prometheus.a on vm11
2026-03-08T23:04:08.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:08 vm11 bash[23232]: audit 2026-03-08T23:04:07.407431+0000 mon.a (mon.0) 786 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:04:08.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:08 vm11 bash[23232]: audit 2026-03-08T23:04:07.415098+0000 mon.a (mon.0) 787 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:04:08.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:08 vm11 bash[23232]: audit 2026-03-08T23:04:07.419110+0000 mon.c (mon.2) 46 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
2026-03-08T23:04:08.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:08 vm11 bash[23232]: audit 2026-03-08T23:04:07.420618+0000 mon.c (mon.2) 47 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm06.local:9093"}]: dispatch
2026-03-08T23:04:08.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:08 vm11 bash[23232]: audit 2026-03-08T23:04:07.424930+0000 mon.a (mon.0) 788 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:04:08.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:08 vm11 bash[23232]: audit 2026-03-08T23:04:07.434149+0000 mon.c (mon.2) 48 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
2026-03-08T23:04:08.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:08 vm11 bash[23232]: audit 2026-03-08T23:04:07.435439+0000 mon.c (mon.2) 49 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm11.local:3000"}]: dispatch
2026-03-08T23:04:08.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:08 vm11 bash[23232]: audit 2026-03-08T23:04:07.440994+0000 mon.a (mon.0) 789 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:04:08.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:08 vm11 bash[23232]: audit 2026-03-08T23:04:07.449728+0000 mon.c (mon.2) 50 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch
2026-03-08T23:04:08.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:08 vm11 bash[23232]: audit 2026-03-08T23:04:07.451025+0000 mon.c (mon.2) 51 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm11.local:9095"}]: dispatch
2026-03-08T23:04:08.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:08 vm11 bash[23232]: audit 2026-03-08T23:04:07.456128+0000 mon.a (mon.0) 790 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:04:08.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:08 vm11 bash[23232]: audit 2026-03-08T23:04:07.491078+0000 mon.c (mon.2) 52 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T23:04:08.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:08 vm11 bash[23232]: audit 2026-03-08T23:04:07.729914+0000 mon.c (mon.2) 53 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-08T23:04:08.779 INFO:journalctl@ceph.alertmanager.a.vm06.stdout:Mar 08 23:04:08 vm06 bash[56369]: ts=2026-03-08T23:04:08.429Z caller=cluster.go:706 level=info component=cluster msg="gossip not settled" polls=0 before=0 now=1 elapsed=2.000145898s
2026-03-08T23:04:09.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:09 vm06 bash[20625]: audit 2026-03-08T23:04:07.419733+0000 mgr.y (mgr.24419) 45 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
2026-03-08T23:04:09.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:09 vm06 bash[20625]: audit 2026-03-08T23:04:07.421053+0000 mgr.y (mgr.24419) 46 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm06.local:9093"}]: dispatch
2026-03-08T23:04:09.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:09 vm06 bash[20625]: audit 2026-03-08T23:04:07.434617+0000 mgr.y (mgr.24419) 47 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
2026-03-08T23:04:09.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:09 vm06 bash[20625]: audit 2026-03-08T23:04:07.435642+0000 mgr.y (mgr.24419) 48 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm11.local:3000"}]: dispatch
2026-03-08T23:04:09.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:09 vm06 bash[20625]: audit 2026-03-08T23:04:07.449963+0000 mgr.y (mgr.24419) 49 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch
2026-03-08T23:04:09.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:09 vm06 bash[20625]: audit 2026-03-08T23:04:07.451231+0000 mgr.y (mgr.24419) 50 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm11.local:9095"}]: dispatch
2026-03-08T23:04:09.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:09 vm06 bash[20625]: audit 2026-03-08T23:04:07.451231+0000 mgr.y (mgr.24419) 50 : audit [DBG] from='mon.? -' entity='mon.'
cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm11.local:9095"}]: dispatch 2026-03-08T23:04:09.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:09 vm06 bash[20625]: cluster 2026-03-08T23:04:07.629630+0000 mgr.y (mgr.24419) 51 : cluster [DBG] pgmap v17: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:04:09.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:09 vm06 bash[20625]: cluster 2026-03-08T23:04:07.629630+0000 mgr.y (mgr.24419) 51 : cluster [DBG] pgmap v17: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:04:09.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:09 vm06 bash[27746]: audit 2026-03-08T23:04:07.419733+0000 mgr.y (mgr.24419) 45 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-08T23:04:09.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:09 vm06 bash[27746]: audit 2026-03-08T23:04:07.419733+0000 mgr.y (mgr.24419) 45 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-08T23:04:09.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:09 vm06 bash[27746]: audit 2026-03-08T23:04:07.421053+0000 mgr.y (mgr.24419) 46 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm06.local:9093"}]: dispatch 2026-03-08T23:04:09.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:09 vm06 bash[27746]: audit 2026-03-08T23:04:07.421053+0000 mgr.y (mgr.24419) 46 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm06.local:9093"}]: dispatch 2026-03-08T23:04:09.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:09 vm06 bash[27746]: audit 2026-03-08T23:04:07.434617+0000 mgr.y (mgr.24419) 47 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-08T23:04:09.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:09 vm06 bash[27746]: audit 2026-03-08T23:04:07.434617+0000 mgr.y (mgr.24419) 47 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-08T23:04:09.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:09 vm06 bash[27746]: audit 2026-03-08T23:04:07.435642+0000 mgr.y (mgr.24419) 48 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm11.local:3000"}]: dispatch 2026-03-08T23:04:09.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:09 vm06 bash[27746]: audit 2026-03-08T23:04:07.435642+0000 mgr.y (mgr.24419) 48 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm11.local:3000"}]: dispatch 2026-03-08T23:04:09.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:09 vm06 bash[27746]: audit 2026-03-08T23:04:07.449963+0000 mgr.y (mgr.24419) 49 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-08T23:04:09.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:09 vm06 bash[27746]: audit 2026-03-08T23:04:07.449963+0000 mgr.y (mgr.24419) 49 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-08T23:04:09.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:09 vm06 bash[27746]: audit 2026-03-08T23:04:07.451231+0000 mgr.y (mgr.24419) 50 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm11.local:9095"}]: dispatch 2026-03-08T23:04:09.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:09 vm06 bash[27746]: audit 2026-03-08T23:04:07.451231+0000 mgr.y (mgr.24419) 50 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm11.local:9095"}]: dispatch 2026-03-08T23:04:09.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:09 vm06 bash[27746]: cluster 2026-03-08T23:04:07.629630+0000 mgr.y (mgr.24419) 51 : cluster [DBG] pgmap v17: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:04:09.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:09 vm06 bash[27746]: cluster 2026-03-08T23:04:07.629630+0000 mgr.y (mgr.24419) 51 : cluster [DBG] pgmap v17: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:04:09.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:09 vm11 bash[23232]: audit 2026-03-08T23:04:07.419733+0000 mgr.y (mgr.24419) 45 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-08T23:04:09.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:09 vm11 bash[23232]: audit 2026-03-08T23:04:07.419733+0000 mgr.y (mgr.24419) 45 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-08T23:04:09.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:09 vm11 bash[23232]: audit 2026-03-08T23:04:07.421053+0000 mgr.y (mgr.24419) 46 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm06.local:9093"}]: dispatch 2026-03-08T23:04:09.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:09 vm11 bash[23232]: audit 2026-03-08T23:04:07.421053+0000 mgr.y (mgr.24419) 46 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm06.local:9093"}]: dispatch 2026-03-08T23:04:09.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:09 vm11 bash[23232]: audit 2026-03-08T23:04:07.434617+0000 mgr.y (mgr.24419) 47 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-08T23:04:09.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:09 vm11 bash[23232]: audit 2026-03-08T23:04:07.434617+0000 mgr.y (mgr.24419) 47 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-08T23:04:09.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:09 vm11 bash[23232]: audit 2026-03-08T23:04:07.435642+0000 mgr.y (mgr.24419) 48 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm11.local:3000"}]: dispatch 2026-03-08T23:04:09.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:09 vm11 bash[23232]: audit 2026-03-08T23:04:07.435642+0000 mgr.y (mgr.24419) 48 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm11.local:3000"}]: dispatch 2026-03-08T23:04:09.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:09 vm11 bash[23232]: audit 2026-03-08T23:04:07.449963+0000 mgr.y (mgr.24419) 49 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-08T23:04:09.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:09 vm11 bash[23232]: audit 2026-03-08T23:04:07.449963+0000 mgr.y (mgr.24419) 49 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-prometheus-api-host"}]: dispatch 2026-03-08T23:04:09.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:09 vm11 bash[23232]: audit 2026-03-08T23:04:07.451231+0000 mgr.y (mgr.24419) 50 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm11.local:9095"}]: dispatch 2026-03-08T23:04:09.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:09 vm11 bash[23232]: audit 2026-03-08T23:04:07.451231+0000 mgr.y (mgr.24419) 50 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://vm11.local:9095"}]: dispatch 2026-03-08T23:04:09.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:09 vm11 bash[23232]: cluster 2026-03-08T23:04:07.629630+0000 mgr.y (mgr.24419) 51 : cluster [DBG] pgmap v17: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:04:09.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:09 vm11 bash[23232]: cluster 2026-03-08T23:04:07.629630+0000 mgr.y (mgr.24419) 51 : cluster [DBG] pgmap v17: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:04:10.717 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/mon.c/config 2026-03-08T23:04:11.007 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-08T23:04:11.007 INFO:teuthology.orchestra.run.vm06.stdout:{"epoch":66,"fsid":"e2eb96e6-1b41-11f1-83e5-75f1b5373d30","created":"2026-03-08T22:56:50.043169+0000","modified":"2026-03-08T23:03:37.586113+0000","last_up_change":"2026-03-08T23:02:41.969578+0000","last_in_change":"2026-03-08T23:02:22.838818+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":18,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":6,"max_osd":8,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"squid","allow_crimson":false,"pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-08T22:59:49.511510+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":
false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"22","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":7.8899998664855957,"score_stable":7.8899998664855957,"optimal_score":0.37999999523162842,"raw_score_acting":3,"raw_score_stable":3,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":2,"pool_name":".rgw.root","create_time":"2026-03-08T23:03:03.094283+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":32,"pg_placement_num":32,"pg_placement_num_target":32,"pg_num_target":32,"pg_num_pending":32,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"56","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{},"application_metadata":{"rgw":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":1.5,"score_stable":1.5,"optimal_score":1,"raw_score_acting":1.5,"raw_score_stable":1.5,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":3,"pool_name":"default.rgw.log","create_time":"2026-03-08T23:03:05.405958+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":32,"pg_placement_num":32,"pg_placement_num_target":32,"pg_num_target":32,"pg_num_pending":32,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"58","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{},"application_metadata":{"rgw":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":1.5,"score_stable":1.5,"optimal_score":1,"raw_score_acting":1.5,"raw_score_stable":1.5,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":4,"pool_name":"datapool","create_time":"2026-03-08T23:03:06.757962+0000","flags":8193,"flags_names":"hashpspool,selfmanaged_snaps","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":3,"pg_placement_num":3,"pg_placement_num_target":3,"pg_num_target":3,"pg_num_pending":3,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"64","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":2,"snap_epoch":64,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{},"application_metadata":{"rbd":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":2.6500000953674316,"score_stable":2.6500000953674316,"optimal_score":0.87999999523162842,"raw_score_acting":2.3299999237060547,"raw_score_stable":2.3299999237060547,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":5,"pool_name":"default.rgw.control","create_time":"2026-03-08T23:03:07.157765+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":32,"pg_placement_num":32,"pg_placement_num_target":32,"pg_num_target":32,"pg_num_pending":32,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"60","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{},"application_metadata":{"rgw":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":1.25,"score_stable":1.25,"optimal_score":1,"raw_score_acting":1.25,"raw_score_stable":1.25,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":6,"pool_name":"default.rgw.meta","create_time":"2026-03-08T23:03:09.333909+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":32,"pg_placement_num":32,"pg_placement_num_target":32,"pg_num_target":32,"pg_num_pending":32,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"62","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_autoscale_bias":4},"application_metadata":{"rgw":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":1.75,"score_stable":1.75,"optimal_score":1,"raw_score_acting":1.75,"raw_score_stable":1.75,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"f584135b-773d-4be0-b5f4-b849576faa2e","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":8,"up_thru":60,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6801","nonce":1756339851}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6802","nonce":1756339851}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6804","nonce":1756339851}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6803","nonce":1756339851}]},"public_addr":"192.168.123.106:6801/1756339851","cluster_addr":"192.168.123.106:6802/1756339851","heartbeat_back_addr":"192.168.123.106:6804/1756339851","heartbeat_front_addr":"192.168.123.106:6803/1756339851","state":["exists","up"]},{"osd":1,"uuid":"2022422b-3e71-4162-b64b-3d25e2ad079e","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":60,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6805","nonce":2598119140}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6806","nonce":2598119140}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6808","nonce":2598119140}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6807","nonce":2598119140}]},"public_addr":"192.168.123.106:6805/2598119140","cluster_addr":"192.168.123.106:6806/2598119140","heartbeat_back_addr":"192.168.123.106:6808/2598119140","heartbeat_front_addr":"192.168.123.106:6807/2598119140","state":["exists","up"]},{"osd":2,"uuid":"127338cf-5856-4d11-8a9b-9cbd216d8507","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from
":18,"up_thru":60,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6809","nonce":2508962009}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6810","nonce":2508962009}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6812","nonce":2508962009}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6811","nonce":2508962009}]},"public_addr":"192.168.123.106:6809/2508962009","cluster_addr":"192.168.123.106:6810/2508962009","heartbeat_back_addr":"192.168.123.106:6812/2508962009","heartbeat_front_addr":"192.168.123.106:6811/2508962009","state":["exists","up"]},{"osd":3,"uuid":"19da1389-a7b0-483c-b2d4-8be50f26c1c4","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":26,"up_thru":60,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6813","nonce":3847325262}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6814","nonce":3847325262}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6816","nonce":3847325262}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.106:6815","nonce":3847325262}]},"public_addr":"192.168.123.106:6813/3847325262","cluster_addr":"192.168.123.106:6814/3847325262","heartbeat_back_addr":"192.168.123.106:6816/3847325262","heartbeat_front_addr":"192.168.123.106:6815/3847325262","state":["exists","up"]},{"osd":4,"uuid":"2b8b0ad5-79bc-4b4c-a515-bc6c029f416f","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":32,"up_thru":60,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.111:6800","nonce":1718317342}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.111:6801","nonce":1718317342}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.111:6803","nonce":1718317342}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2",
"addr":"192.168.123.111:6802","nonce":1718317342}]},"public_addr":"192.168.123.111:6800/1718317342","cluster_addr":"192.168.123.111:6801/1718317342","heartbeat_back_addr":"192.168.123.111:6803/1718317342","heartbeat_front_addr":"192.168.123.111:6802/1718317342","state":["exists","up"]},{"osd":5,"uuid":"ebf4133c-ae3a-4afe-9e9e-4c894f65f53e","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":38,"up_thru":60,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.111:6804","nonce":3102108212}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.111:6805","nonce":3102108212}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.111:6807","nonce":3102108212}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.111:6806","nonce":3102108212}]},"public_addr":"192.168.123.111:6804/3102108212","cluster_addr":"192.168.123.111:6805/3102108212","heartbeat_back_addr":"192.168.123.111:6807/3102108212","heartbeat_front_addr":"192.168.123.111:6806/3102108212","state":["exists","up"]},{"osd":6,"uuid":"1359b0d9-00db-474d-93f0-8246b9a8fa82","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":44,"up_thru":58,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.111:6808","nonce":3646507391}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.111:6809","nonce":3646507391}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.111:6811","nonce":3646507391}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.111:6810","nonce":3646507391}]},"public_addr":"192.168.123.111:6808/3646507391","cluster_addr":"192.168.123.111:6809/3646507391","heartbeat_back_addr":"192.168.123.111:6811/3646507391","heartbeat_front_addr":"192.168.123.111:6810/3646507391","state":["exists","up"]},{"osd":7,"uuid":"29b40029-6843-47e4-b83e-af6cefd3e500","up":1,"in":1,"weight":1,"primary_
affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":51,"up_thru":60,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.111:6812","nonce":5515467}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.111:6813","nonce":5515467}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.111:6815","nonce":5515467}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.111:6814","nonce":5515467}]},"public_addr":"192.168.123.111:6812/5515467","cluster_addr":"192.168.123.111:6813/5515467","heartbeat_back_addr":"192.168.123.111:6815/5515467","heartbeat_front_addr":"192.168.123.111:6814/5515467","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-08T22:58:36.480874+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-08T22:59:10.436552+0000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-08T22:59:44.575086+0000","dead_epoch":0},{"osd":3,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-08T23:00:19.486922+0000","dead_epoch":0},{"osd":4,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-08T23:00:54.034338+0000","dead_epoch":0},{"osd":5,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-08T23:01:29.981522+0000","dead_epoch":0},{"osd":6,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_w
eight":0,"last_purged_snaps_scrub":"2026-03-08T23:02:03.219993+0000","dead_epoch":0},{"osd":7,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-08T23:02:39.212275+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.106:0/3526356403":"2026-03-09T23:03:37.586080+0000","192.168.123.106:0/2523815248":"2026-03-09T22:57:01.174562+0000","192.168.123.106:0/1915233046":"2026-03-09T22:57:11.437532+0000","192.168.123.106:0/1313816001":"2026-03-09T22:57:01.174562+0000","192.168.123.106:0/2787078610":"2026-03-09T22:57:01.174562+0000","192.168.123.106:6800/1101559289":"2026-03-09T22:57:01.174562+0000","192.168.123.106:0/1991674123":"2026-03-09T23:03:37.586080+0000","192.168.123.106:0/1740051211":"2026-03-09T22:57:11.437532+0000","192.168.123.106:0/410680846":"2026-03-09T22:57:11.437532+0000","192.168.123.106:6800/1580927884":"2026-03-09T22:57:11.437532+0000","192.168.123.106:0/472491601":"2026-03-09T23:03:37.586080+0000","192.168.123.106:0/1125702742":"2026-03-09T23:03:37.586080+0000","192.168.123.106:0/100617998":"2026-03-09T23:03:37.586080+0000","192.168.123.106:6800/3890676051":"2026-03-09T23:03:37.586080+0000"},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-08T23:04:11.062 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid e2eb96e6-1b41-11f1-83e5-75f1b5373d30 -- ceph tell osd.0 flush_pg_stats 2026-03-08T23:04:11.063 
DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid e2eb96e6-1b41-11f1-83e5-75f1b5373d30 -- ceph tell osd.1 flush_pg_stats 2026-03-08T23:04:11.063 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid e2eb96e6-1b41-11f1-83e5-75f1b5373d30 -- ceph tell osd.2 flush_pg_stats 2026-03-08T23:04:11.063 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid e2eb96e6-1b41-11f1-83e5-75f1b5373d30 -- ceph tell osd.3 flush_pg_stats 2026-03-08T23:04:11.063 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid e2eb96e6-1b41-11f1-83e5-75f1b5373d30 -- ceph tell osd.4 flush_pg_stats 2026-03-08T23:04:11.063 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid e2eb96e6-1b41-11f1-83e5-75f1b5373d30 -- ceph tell osd.5 flush_pg_stats 2026-03-08T23:04:11.063 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid e2eb96e6-1b41-11f1-83e5-75f1b5373d30 -- ceph tell osd.6 flush_pg_stats 2026-03-08T23:04:11.063 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid e2eb96e6-1b41-11f1-83e5-75f1b5373d30 -- ceph tell osd.7 flush_pg_stats 2026-03-08T23:04:11.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:11 vm06 bash[20625]: cluster 2026-03-08T23:04:09.629866+0000 mgr.y (mgr.24419) 52 : cluster [DBG] pgmap v18: 132 pgs: 132 active+clean; 455 KiB data, 216 
MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:04:11.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:11 vm06 bash[20625]: cluster 2026-03-08T23:04:09.629866+0000 mgr.y (mgr.24419) 52 : cluster [DBG] pgmap v18: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:04:11.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:11 vm06 bash[20625]: audit 2026-03-08T23:04:11.006715+0000 mon.a (mon.0) 791 : audit [DBG] from='client.? 192.168.123.106:0/3808251881' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-08T23:04:11.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:11 vm06 bash[20625]: audit 2026-03-08T23:04:11.006715+0000 mon.a (mon.0) 791 : audit [DBG] from='client.? 192.168.123.106:0/3808251881' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-08T23:04:11.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:11 vm06 bash[27746]: cluster 2026-03-08T23:04:09.629866+0000 mgr.y (mgr.24419) 52 : cluster [DBG] pgmap v18: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:04:11.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:11 vm06 bash[27746]: cluster 2026-03-08T23:04:09.629866+0000 mgr.y (mgr.24419) 52 : cluster [DBG] pgmap v18: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:04:11.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:11 vm06 bash[27746]: audit 2026-03-08T23:04:11.006715+0000 mon.a (mon.0) 791 : audit [DBG] from='client.? 192.168.123.106:0/3808251881' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-08T23:04:11.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:11 vm06 bash[27746]: audit 2026-03-08T23:04:11.006715+0000 mon.a (mon.0) 791 : audit [DBG] from='client.? 
192.168.123.106:0/3808251881' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-08T23:04:11.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:11 vm11 bash[23232]: cluster 2026-03-08T23:04:09.629866+0000 mgr.y (mgr.24419) 52 : cluster [DBG] pgmap v18: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:04:11.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:11 vm11 bash[23232]: cluster 2026-03-08T23:04:09.629866+0000 mgr.y (mgr.24419) 52 : cluster [DBG] pgmap v18: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:04:11.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:11 vm11 bash[23232]: audit 2026-03-08T23:04:11.006715+0000 mon.a (mon.0) 791 : audit [DBG] from='client.? 192.168.123.106:0/3808251881' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-08T23:04:11.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:11 vm11 bash[23232]: audit 2026-03-08T23:04:11.006715+0000 mon.a (mon.0) 791 : audit [DBG] from='client.? 
192.168.123.106:0/3808251881' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-08T23:04:12.558 INFO:journalctl@ceph.iscsi.iscsi.a.vm11.stdout:Mar 08 23:04:12 vm11 bash[48986]: debug there is no tcmu-runner data available 2026-03-08T23:04:13.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:13 vm06 bash[20625]: cluster 2026-03-08T23:04:11.630429+0000 mgr.y (mgr.24419) 53 : cluster [DBG] pgmap v19: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:04:13.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:13 vm06 bash[20625]: cluster 2026-03-08T23:04:11.630429+0000 mgr.y (mgr.24419) 53 : cluster [DBG] pgmap v19: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:04:13.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:13 vm06 bash[20625]: audit 2026-03-08T23:04:12.058636+0000 mgr.y (mgr.24419) 54 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:04:13.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:13 vm06 bash[20625]: audit 2026-03-08T23:04:12.058636+0000 mgr.y (mgr.24419) 54 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:04:13.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:13 vm06 bash[20625]: audit 2026-03-08T23:04:12.113331+0000 mon.a (mon.0) 792 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:13.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:13 vm06 bash[20625]: audit 2026-03-08T23:04:12.113331+0000 mon.a (mon.0) 792 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:13.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:13 vm06 bash[20625]: audit 2026-03-08T23:04:12.121235+0000 mon.a (mon.0) 793 : audit [INF] from='mgr.24419 ' 
entity='mgr.y' 2026-03-08T23:04:13.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:13 vm06 bash[20625]: audit 2026-03-08T23:04:12.121235+0000 mon.a (mon.0) 793 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:13.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:13 vm06 bash[20625]: audit 2026-03-08T23:04:12.862321+0000 mon.a (mon.0) 794 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:13.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:13 vm06 bash[20625]: audit 2026-03-08T23:04:12.862321+0000 mon.a (mon.0) 794 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:13.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:13 vm06 bash[20625]: audit 2026-03-08T23:04:12.867493+0000 mon.a (mon.0) 795 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:13.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:13 vm06 bash[20625]: audit 2026-03-08T23:04:12.867493+0000 mon.a (mon.0) 795 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:13.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:13 vm06 bash[20625]: audit 2026-03-08T23:04:12.868508+0000 mon.c (mon.2) 54 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:04:13.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:13 vm06 bash[20625]: audit 2026-03-08T23:04:12.868508+0000 mon.c (mon.2) 54 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:04:13.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:13 vm06 bash[20625]: audit 2026-03-08T23:04:12.869048+0000 mon.c (mon.2) 55 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:04:13.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:13 vm06 bash[20625]: audit 
2026-03-08T23:04:12.869048+0000 mon.c (mon.2) 55 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:04:13.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:13 vm06 bash[20625]: audit 2026-03-08T23:04:12.873262+0000 mon.a (mon.0) 796 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:13.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:13 vm06 bash[20625]: audit 2026-03-08T23:04:12.873262+0000 mon.a (mon.0) 796 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:13.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:13 vm06 bash[27746]: cluster 2026-03-08T23:04:11.630429+0000 mgr.y (mgr.24419) 53 : cluster [DBG] pgmap v19: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:04:13.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:13 vm06 bash[27746]: cluster 2026-03-08T23:04:11.630429+0000 mgr.y (mgr.24419) 53 : cluster [DBG] pgmap v19: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:04:13.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:13 vm06 bash[27746]: audit 2026-03-08T23:04:12.058636+0000 mgr.y (mgr.24419) 54 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:04:13.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:13 vm06 bash[27746]: audit 2026-03-08T23:04:12.058636+0000 mgr.y (mgr.24419) 54 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:04:13.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:13 vm06 bash[27746]: audit 2026-03-08T23:04:12.113331+0000 mon.a (mon.0) 792 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:13.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 
23:04:13 vm06 bash[27746]: audit 2026-03-08T23:04:12.113331+0000 mon.a (mon.0) 792 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:13.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:13 vm06 bash[27746]: audit 2026-03-08T23:04:12.121235+0000 mon.a (mon.0) 793 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:13.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:13 vm06 bash[27746]: audit 2026-03-08T23:04:12.121235+0000 mon.a (mon.0) 793 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:13.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:13 vm06 bash[27746]: audit 2026-03-08T23:04:12.862321+0000 mon.a (mon.0) 794 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:13.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:13 vm06 bash[27746]: audit 2026-03-08T23:04:12.862321+0000 mon.a (mon.0) 794 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:13.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:13 vm06 bash[27746]: audit 2026-03-08T23:04:12.867493+0000 mon.a (mon.0) 795 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:13.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:13 vm06 bash[27746]: audit 2026-03-08T23:04:12.867493+0000 mon.a (mon.0) 795 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:13.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:13 vm06 bash[27746]: audit 2026-03-08T23:04:12.868508+0000 mon.c (mon.2) 54 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:04:13.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:13 vm06 bash[27746]: audit 2026-03-08T23:04:12.868508+0000 mon.c (mon.2) 54 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:04:13.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:13 vm06 bash[27746]: 
audit 2026-03-08T23:04:12.869048+0000 mon.c (mon.2) 55 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:04:13.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:13 vm06 bash[27746]: audit 2026-03-08T23:04:12.869048+0000 mon.c (mon.2) 55 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:04:13.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:13 vm06 bash[27746]: audit 2026-03-08T23:04:12.873262+0000 mon.a (mon.0) 796 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:13.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:13 vm06 bash[27746]: audit 2026-03-08T23:04:12.873262+0000 mon.a (mon.0) 796 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:13.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:13 vm11 bash[23232]: cluster 2026-03-08T23:04:11.630429+0000 mgr.y (mgr.24419) 53 : cluster [DBG] pgmap v19: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:04:13.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:13 vm11 bash[23232]: cluster 2026-03-08T23:04:11.630429+0000 mgr.y (mgr.24419) 53 : cluster [DBG] pgmap v19: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:04:13.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:13 vm11 bash[23232]: audit 2026-03-08T23:04:12.058636+0000 mgr.y (mgr.24419) 54 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:04:13.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:13 vm11 bash[23232]: audit 2026-03-08T23:04:12.058636+0000 mgr.y (mgr.24419) 54 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", 
"format": "json"}]: dispatch 2026-03-08T23:04:13.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:13 vm11 bash[23232]: audit 2026-03-08T23:04:12.113331+0000 mon.a (mon.0) 792 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:13.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:13 vm11 bash[23232]: audit 2026-03-08T23:04:12.113331+0000 mon.a (mon.0) 792 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:13.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:13 vm11 bash[23232]: audit 2026-03-08T23:04:12.121235+0000 mon.a (mon.0) 793 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:13.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:13 vm11 bash[23232]: audit 2026-03-08T23:04:12.121235+0000 mon.a (mon.0) 793 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:13.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:13 vm11 bash[23232]: audit 2026-03-08T23:04:12.862321+0000 mon.a (mon.0) 794 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:13.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:13 vm11 bash[23232]: audit 2026-03-08T23:04:12.862321+0000 mon.a (mon.0) 794 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:13.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:13 vm11 bash[23232]: audit 2026-03-08T23:04:12.867493+0000 mon.a (mon.0) 795 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:13.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:13 vm11 bash[23232]: audit 2026-03-08T23:04:12.867493+0000 mon.a (mon.0) 795 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:13.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:13 vm11 bash[23232]: audit 2026-03-08T23:04:12.868508+0000 mon.c (mon.2) 54 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:04:13.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:13 vm11 
bash[23232]: audit 2026-03-08T23:04:12.868508+0000 mon.c (mon.2) 54 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:04:13.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:13 vm11 bash[23232]: audit 2026-03-08T23:04:12.869048+0000 mon.c (mon.2) 55 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:04:13.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:13 vm11 bash[23232]: audit 2026-03-08T23:04:12.869048+0000 mon.c (mon.2) 55 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:04:13.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:13 vm11 bash[23232]: audit 2026-03-08T23:04:12.873262+0000 mon.a (mon.0) 796 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:13.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:13 vm11 bash[23232]: audit 2026-03-08T23:04:12.873262+0000 mon.a (mon.0) 796 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:15.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:15 vm06 bash[20625]: cluster 2026-03-08T23:04:13.630726+0000 mgr.y (mgr.24419) 55 : cluster [DBG] pgmap v20: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:04:15.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:15 vm06 bash[20625]: cluster 2026-03-08T23:04:13.630726+0000 mgr.y (mgr.24419) 55 : cluster [DBG] pgmap v20: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:04:15.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:15 vm06 bash[27746]: cluster 2026-03-08T23:04:13.630726+0000 mgr.y (mgr.24419) 55 : cluster [DBG] pgmap v20: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB 
/ 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:04:15.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:15 vm06 bash[27746]: cluster 2026-03-08T23:04:13.630726+0000 mgr.y (mgr.24419) 55 : cluster [DBG] pgmap v20: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:04:15.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:15 vm11 bash[23232]: cluster 2026-03-08T23:04:13.630726+0000 mgr.y (mgr.24419) 55 : cluster [DBG] pgmap v20: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:04:15.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:15 vm11 bash[23232]: cluster 2026-03-08T23:04:13.630726+0000 mgr.y (mgr.24419) 55 : cluster [DBG] pgmap v20: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:04:15.768 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/mon.c/config 2026-03-08T23:04:15.769 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/mon.c/config 2026-03-08T23:04:15.771 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/mon.c/config 2026-03-08T23:04:15.771 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/mon.c/config 2026-03-08T23:04:15.774 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/mon.c/config 2026-03-08T23:04:15.774 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/mon.c/config 2026-03-08T23:04:15.775 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/mon.c/config 2026-03-08T23:04:15.778 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config 
/var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/mon.c/config 2026-03-08T23:04:16.440 INFO:teuthology.orchestra.run.vm06.stdout:111669149744 2026-03-08T23:04:16.440 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid e2eb96e6-1b41-11f1-83e5-75f1b5373d30 -- ceph osd last-stat-seq osd.3 2026-03-08T23:04:16.543 INFO:teuthology.orchestra.run.vm06.stdout:34359738436 2026-03-08T23:04:16.543 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid e2eb96e6-1b41-11f1-83e5-75f1b5373d30 -- ceph osd last-stat-seq osd.0 2026-03-08T23:04:16.590 INFO:journalctl@ceph.alertmanager.a.vm06.stdout:Mar 08 23:04:16 vm06 bash[56369]: ts=2026-03-08T23:04:16.431Z caller=cluster.go:698 level=info component=cluster msg="gossip settled; proceeding" elapsed=10.002213056s 2026-03-08T23:04:16.658 INFO:teuthology.orchestra.run.vm06.stdout:77309411383 2026-03-08T23:04:16.658 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid e2eb96e6-1b41-11f1-83e5-75f1b5373d30 -- ceph osd last-stat-seq osd.2 2026-03-08T23:04:16.686 INFO:teuthology.orchestra.run.vm06.stdout:137438953514 2026-03-08T23:04:16.686 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid e2eb96e6-1b41-11f1-83e5-75f1b5373d30 -- ceph osd last-stat-seq osd.4 2026-03-08T23:04:16.695 INFO:teuthology.orchestra.run.vm06.stdout:188978561051 2026-03-08T23:04:16.695 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid e2eb96e6-1b41-11f1-83e5-75f1b5373d30 -- ceph osd last-stat-seq osd.6 2026-03-08T23:04:16.696 
INFO:teuthology.orchestra.run.vm06.stdout:163208757282 2026-03-08T23:04:16.696 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid e2eb96e6-1b41-11f1-83e5-75f1b5373d30 -- ceph osd last-stat-seq osd.5 2026-03-08T23:04:16.704 INFO:teuthology.orchestra.run.vm06.stdout:55834574910 2026-03-08T23:04:16.704 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid e2eb96e6-1b41-11f1-83e5-75f1b5373d30 -- ceph osd last-stat-seq osd.1 2026-03-08T23:04:16.742 INFO:teuthology.orchestra.run.vm06.stdout:219043332116 2026-03-08T23:04:16.743 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid e2eb96e6-1b41-11f1-83e5-75f1b5373d30 -- ceph osd last-stat-seq osd.7 2026-03-08T23:04:17.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:17 vm06 bash[20625]: cluster 2026-03-08T23:04:15.631102+0000 mgr.y (mgr.24419) 56 : cluster [DBG] pgmap v21: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:04:17.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:17 vm06 bash[20625]: cluster 2026-03-08T23:04:15.631102+0000 mgr.y (mgr.24419) 56 : cluster [DBG] pgmap v21: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:04:17.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:17 vm06 bash[27746]: cluster 2026-03-08T23:04:15.631102+0000 mgr.y (mgr.24419) 56 : cluster [DBG] pgmap v21: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:04:17.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:17 vm06 bash[27746]: cluster 2026-03-08T23:04:15.631102+0000 mgr.y (mgr.24419) 56 : cluster 
[DBG] pgmap v21: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:04:17.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:17 vm11 bash[23232]: cluster 2026-03-08T23:04:15.631102+0000 mgr.y (mgr.24419) 56 : cluster [DBG] pgmap v21: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:04:17.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:17 vm11 bash[23232]: cluster 2026-03-08T23:04:15.631102+0000 mgr.y (mgr.24419) 56 : cluster [DBG] pgmap v21: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:04:19.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:19 vm06 bash[20625]: cluster 2026-03-08T23:04:17.631580+0000 mgr.y (mgr.24419) 57 : cluster [DBG] pgmap v22: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-08T23:04:19.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:19 vm06 bash[20625]: cluster 2026-03-08T23:04:17.631580+0000 mgr.y (mgr.24419) 57 : cluster [DBG] pgmap v22: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-08T23:04:19.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:19 vm06 bash[27746]: cluster 2026-03-08T23:04:17.631580+0000 mgr.y (mgr.24419) 57 : cluster [DBG] pgmap v22: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-08T23:04:19.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:19 vm06 bash[27746]: cluster 2026-03-08T23:04:17.631580+0000 mgr.y (mgr.24419) 57 : cluster [DBG] pgmap v22: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-08T23:04:19.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:19 vm11 bash[23232]: cluster 2026-03-08T23:04:17.631580+0000 mgr.y (mgr.24419) 57 : cluster 
[DBG] pgmap v22: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-08T23:04:19.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:19 vm11 bash[23232]: cluster 2026-03-08T23:04:17.631580+0000 mgr.y (mgr.24419) 57 : cluster [DBG] pgmap v22: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-08T23:04:20.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:20 vm06 bash[20625]: cluster 2026-03-08T23:04:19.631853+0000 mgr.y (mgr.24419) 58 : cluster [DBG] pgmap v23: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-08T23:04:20.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:20 vm06 bash[20625]: cluster 2026-03-08T23:04:19.631853+0000 mgr.y (mgr.24419) 58 : cluster [DBG] pgmap v23: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-08T23:04:20.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:20 vm06 bash[27746]: cluster 2026-03-08T23:04:19.631853+0000 mgr.y (mgr.24419) 58 : cluster [DBG] pgmap v23: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-08T23:04:20.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:20 vm06 bash[27746]: cluster 2026-03-08T23:04:19.631853+0000 mgr.y (mgr.24419) 58 : cluster [DBG] pgmap v23: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-08T23:04:20.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:20 vm11 bash[23232]: cluster 2026-03-08T23:04:19.631853+0000 mgr.y (mgr.24419) 58 : cluster [DBG] pgmap v23: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-08T23:04:20.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:20 vm11 bash[23232]: cluster 2026-03-08T23:04:19.631853+0000 mgr.y (mgr.24419) 58 : cluster 
[DBG] pgmap v23: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1023 B/s rd, 0 op/s 2026-03-08T23:04:21.029 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:04:20 vm06 bash[20883]: ::ffff:192.168.123.111 - - [08/Mar/2026:23:04:20] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-08T23:04:21.261 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/mon.c/config 2026-03-08T23:04:21.262 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/mon.c/config 2026-03-08T23:04:21.266 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/mon.c/config 2026-03-08T23:04:21.267 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/mon.c/config 2026-03-08T23:04:21.273 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/mon.c/config 2026-03-08T23:04:21.275 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/mon.c/config 2026-03-08T23:04:21.277 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/mon.c/config 2026-03-08T23:04:21.279 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/mon.c/config 2026-03-08T23:04:22.557 INFO:journalctl@ceph.iscsi.iscsi.a.vm11.stdout:Mar 08 23:04:22 vm11 bash[48986]: debug there is no tcmu-runner data available 2026-03-08T23:04:22.904 INFO:teuthology.orchestra.run.vm06.stdout:55834574911 2026-03-08T23:04:23.060 INFO:tasks.cephadm.ceph_manager.ceph:need seq 55834574910 got 55834574911 for osd.1 2026-03-08T23:04:23.060 DEBUG:teuthology.parallel:result is None 2026-03-08T23:04:23.184 INFO:teuthology.orchestra.run.vm06.stdout:163208757283 2026-03-08T23:04:23.225 
INFO:teuthology.orchestra.run.vm06.stdout:34359738437 2026-03-08T23:04:23.233 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:22 vm06 bash[20625]: cluster 2026-03-08T23:04:21.632280+0000 mgr.y (mgr.24419) 59 : cluster [DBG] pgmap v24: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:04:23.233 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:22 vm06 bash[20625]: cluster 2026-03-08T23:04:21.632280+0000 mgr.y (mgr.24419) 59 : cluster [DBG] pgmap v24: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:04:23.233 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:22 vm06 bash[20625]: audit 2026-03-08T23:04:22.066653+0000 mgr.y (mgr.24419) 60 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:04:23.233 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:22 vm06 bash[20625]: audit 2026-03-08T23:04:22.066653+0000 mgr.y (mgr.24419) 60 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:04:23.233 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:22 vm06 bash[20625]: audit 2026-03-08T23:04:22.740117+0000 mon.c (mon.2) 56 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:04:23.233 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:22 vm06 bash[20625]: audit 2026-03-08T23:04:22.740117+0000 mon.c (mon.2) 56 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:04:23.233 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:22 vm06 bash[20625]: audit 2026-03-08T23:04:22.902618+0000 mon.c (mon.2) 57 : audit [DBG] from='client.? 
192.168.123.106:0/1204930397' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-08T23:04:23.233 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:22 vm06 bash[20625]: audit 2026-03-08T23:04:22.902618+0000 mon.c (mon.2) 57 : audit [DBG] from='client.? 192.168.123.106:0/1204930397' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-08T23:04:23.233 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:22 vm06 bash[27746]: cluster 2026-03-08T23:04:21.632280+0000 mgr.y (mgr.24419) 59 : cluster [DBG] pgmap v24: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:04:23.233 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:22 vm06 bash[27746]: cluster 2026-03-08T23:04:21.632280+0000 mgr.y (mgr.24419) 59 : cluster [DBG] pgmap v24: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:04:23.233 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:22 vm06 bash[27746]: audit 2026-03-08T23:04:22.066653+0000 mgr.y (mgr.24419) 60 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:04:23.233 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:22 vm06 bash[27746]: audit 2026-03-08T23:04:22.066653+0000 mgr.y (mgr.24419) 60 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:04:23.233 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:22 vm06 bash[27746]: audit 2026-03-08T23:04:22.740117+0000 mon.c (mon.2) 56 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:04:23.233 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:22 vm06 bash[27746]: audit 2026-03-08T23:04:22.740117+0000 mon.c (mon.2) 56 : audit [DBG] 
from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:04:23.233 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:22 vm06 bash[27746]: audit 2026-03-08T23:04:22.902618+0000 mon.c (mon.2) 57 : audit [DBG] from='client.? 192.168.123.106:0/1204930397' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-08T23:04:23.233 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:22 vm06 bash[27746]: audit 2026-03-08T23:04:22.902618+0000 mon.c (mon.2) 57 : audit [DBG] from='client.? 192.168.123.106:0/1204930397' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-08T23:04:23.261 INFO:teuthology.orchestra.run.vm06.stdout:188978561052 2026-03-08T23:04:23.303 INFO:teuthology.orchestra.run.vm06.stdout:137438953515 2026-03-08T23:04:23.303 INFO:teuthology.orchestra.run.vm06.stdout:111669149745 2026-03-08T23:04:23.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:22 vm11 bash[23232]: cluster 2026-03-08T23:04:21.632280+0000 mgr.y (mgr.24419) 59 : cluster [DBG] pgmap v24: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:04:23.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:22 vm11 bash[23232]: cluster 2026-03-08T23:04:21.632280+0000 mgr.y (mgr.24419) 59 : cluster [DBG] pgmap v24: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:04:23.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:22 vm11 bash[23232]: audit 2026-03-08T23:04:22.066653+0000 mgr.y (mgr.24419) 60 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:04:23.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:22 vm11 bash[23232]: audit 2026-03-08T23:04:22.066653+0000 mgr.y (mgr.24419) 60 : audit [DBG] from='client.24421 -' 
entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:04:23.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:22 vm11 bash[23232]: audit 2026-03-08T23:04:22.740117+0000 mon.c (mon.2) 56 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:04:23.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:22 vm11 bash[23232]: audit 2026-03-08T23:04:22.740117+0000 mon.c (mon.2) 56 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:04:23.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:22 vm11 bash[23232]: audit 2026-03-08T23:04:22.902618+0000 mon.c (mon.2) 57 : audit [DBG] from='client.? 192.168.123.106:0/1204930397' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-08T23:04:23.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:22 vm11 bash[23232]: audit 2026-03-08T23:04:22.902618+0000 mon.c (mon.2) 57 : audit [DBG] from='client.? 
192.168.123.106:0/1204930397' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-08T23:04:23.328 INFO:teuthology.orchestra.run.vm06.stdout:77309411384 2026-03-08T23:04:23.352 INFO:teuthology.orchestra.run.vm06.stdout:219043332117 2026-03-08T23:04:23.386 INFO:tasks.cephadm.ceph_manager.ceph:need seq 163208757282 got 163208757283 for osd.5 2026-03-08T23:04:23.386 DEBUG:teuthology.parallel:result is None 2026-03-08T23:04:23.441 INFO:tasks.cephadm.ceph_manager.ceph:need seq 188978561051 got 188978561052 for osd.6 2026-03-08T23:04:23.441 DEBUG:teuthology.parallel:result is None 2026-03-08T23:04:23.837 INFO:tasks.cephadm.ceph_manager.ceph:need seq 34359738436 got 34359738437 for osd.0 2026-03-08T23:04:23.837 DEBUG:teuthology.parallel:result is None 2026-03-08T23:04:23.851 INFO:tasks.cephadm.ceph_manager.ceph:need seq 219043332116 got 219043332117 for osd.7 2026-03-08T23:04:23.851 DEBUG:teuthology.parallel:result is None 2026-03-08T23:04:23.871 INFO:tasks.cephadm.ceph_manager.ceph:need seq 77309411383 got 77309411384 for osd.2 2026-03-08T23:04:23.871 DEBUG:teuthology.parallel:result is None 2026-03-08T23:04:23.873 INFO:tasks.cephadm.ceph_manager.ceph:need seq 137438953514 got 137438953515 for osd.4 2026-03-08T23:04:23.873 DEBUG:teuthology.parallel:result is None 2026-03-08T23:04:23.875 INFO:tasks.cephadm.ceph_manager.ceph:need seq 111669149744 got 111669149745 for osd.3 2026-03-08T23:04:23.875 DEBUG:teuthology.parallel:result is None 2026-03-08T23:04:23.875 INFO:tasks.cephadm.ceph_manager.ceph:waiting for clean 2026-03-08T23:04:23.875 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid e2eb96e6-1b41-11f1-83e5-75f1b5373d30 -- ceph pg dump --format=json 2026-03-08T23:04:24.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:23 vm06 bash[27746]: audit 2026-03-08T23:04:23.177069+0000 mon.b (mon.1) 35 : audit [DBG] from='client.? 
192.168.123.106:0/2008933836' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 5}]: dispatch 2026-03-08T23:04:24.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:23 vm06 bash[27746]: audit 2026-03-08T23:04:23.177069+0000 mon.b (mon.1) 35 : audit [DBG] from='client.? 192.168.123.106:0/2008933836' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 5}]: dispatch 2026-03-08T23:04:24.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:23 vm06 bash[27746]: audit 2026-03-08T23:04:23.225552+0000 mon.c (mon.2) 58 : audit [DBG] from='client.? 192.168.123.106:0/3454463213' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-08T23:04:24.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:23 vm06 bash[27746]: audit 2026-03-08T23:04:23.225552+0000 mon.c (mon.2) 58 : audit [DBG] from='client.? 192.168.123.106:0/3454463213' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-08T23:04:24.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:23 vm06 bash[27746]: audit 2026-03-08T23:04:23.261714+0000 mon.c (mon.2) 59 : audit [DBG] from='client.? 192.168.123.106:0/547722311' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 6}]: dispatch 2026-03-08T23:04:24.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:23 vm06 bash[27746]: audit 2026-03-08T23:04:23.261714+0000 mon.c (mon.2) 59 : audit [DBG] from='client.? 192.168.123.106:0/547722311' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 6}]: dispatch 2026-03-08T23:04:24.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:23 vm06 bash[27746]: audit 2026-03-08T23:04:23.293031+0000 mon.c (mon.2) 60 : audit [DBG] from='client.? 
192.168.123.106:0/110065167' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 3}]: dispatch 2026-03-08T23:04:24.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:23 vm06 bash[27746]: audit 2026-03-08T23:04:23.293031+0000 mon.c (mon.2) 60 : audit [DBG] from='client.? 192.168.123.106:0/110065167' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 3}]: dispatch 2026-03-08T23:04:24.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:23 vm06 bash[27746]: audit 2026-03-08T23:04:23.301162+0000 mon.a (mon.0) 797 : audit [DBG] from='client.? 192.168.123.106:0/2694440586' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 4}]: dispatch 2026-03-08T23:04:24.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:23 vm06 bash[27746]: audit 2026-03-08T23:04:23.301162+0000 mon.a (mon.0) 797 : audit [DBG] from='client.? 192.168.123.106:0/2694440586' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 4}]: dispatch 2026-03-08T23:04:24.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:23 vm06 bash[27746]: audit 2026-03-08T23:04:23.325589+0000 mon.b (mon.1) 36 : audit [DBG] from='client.? 192.168.123.106:0/2393098563' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-08T23:04:24.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:23 vm06 bash[27746]: audit 2026-03-08T23:04:23.325589+0000 mon.b (mon.1) 36 : audit [DBG] from='client.? 192.168.123.106:0/2393098563' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-08T23:04:24.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:23 vm06 bash[27746]: audit 2026-03-08T23:04:23.350781+0000 mon.c (mon.2) 61 : audit [DBG] from='client.? 
192.168.123.106:0/1205071230' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 7}]: dispatch 2026-03-08T23:04:24.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:23 vm06 bash[27746]: audit 2026-03-08T23:04:23.350781+0000 mon.c (mon.2) 61 : audit [DBG] from='client.? 192.168.123.106:0/1205071230' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 7}]: dispatch 2026-03-08T23:04:24.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:23 vm06 bash[20625]: audit 2026-03-08T23:04:23.177069+0000 mon.b (mon.1) 35 : audit [DBG] from='client.? 192.168.123.106:0/2008933836' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 5}]: dispatch 2026-03-08T23:04:24.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:23 vm06 bash[20625]: audit 2026-03-08T23:04:23.177069+0000 mon.b (mon.1) 35 : audit [DBG] from='client.? 192.168.123.106:0/2008933836' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 5}]: dispatch 2026-03-08T23:04:24.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:23 vm06 bash[20625]: audit 2026-03-08T23:04:23.225552+0000 mon.c (mon.2) 58 : audit [DBG] from='client.? 192.168.123.106:0/3454463213' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-08T23:04:24.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:23 vm06 bash[20625]: audit 2026-03-08T23:04:23.225552+0000 mon.c (mon.2) 58 : audit [DBG] from='client.? 192.168.123.106:0/3454463213' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-08T23:04:24.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:23 vm06 bash[20625]: audit 2026-03-08T23:04:23.261714+0000 mon.c (mon.2) 59 : audit [DBG] from='client.? 
192.168.123.106:0/547722311' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 6}]: dispatch 2026-03-08T23:04:24.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:23 vm06 bash[20625]: audit 2026-03-08T23:04:23.261714+0000 mon.c (mon.2) 59 : audit [DBG] from='client.? 192.168.123.106:0/547722311' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 6}]: dispatch 2026-03-08T23:04:24.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:23 vm06 bash[20625]: audit 2026-03-08T23:04:23.293031+0000 mon.c (mon.2) 60 : audit [DBG] from='client.? 192.168.123.106:0/110065167' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 3}]: dispatch 2026-03-08T23:04:24.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:23 vm06 bash[20625]: audit 2026-03-08T23:04:23.293031+0000 mon.c (mon.2) 60 : audit [DBG] from='client.? 192.168.123.106:0/110065167' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 3}]: dispatch 2026-03-08T23:04:24.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:23 vm06 bash[20625]: audit 2026-03-08T23:04:23.301162+0000 mon.a (mon.0) 797 : audit [DBG] from='client.? 192.168.123.106:0/2694440586' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 4}]: dispatch 2026-03-08T23:04:24.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:23 vm06 bash[20625]: audit 2026-03-08T23:04:23.301162+0000 mon.a (mon.0) 797 : audit [DBG] from='client.? 192.168.123.106:0/2694440586' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 4}]: dispatch 2026-03-08T23:04:24.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:23 vm06 bash[20625]: audit 2026-03-08T23:04:23.325589+0000 mon.b (mon.1) 36 : audit [DBG] from='client.? 
192.168.123.106:0/2393098563' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-08T23:04:24.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:23 vm06 bash[20625]: audit 2026-03-08T23:04:23.325589+0000 mon.b (mon.1) 36 : audit [DBG] from='client.? 192.168.123.106:0/2393098563' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-08T23:04:24.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:23 vm06 bash[20625]: audit 2026-03-08T23:04:23.350781+0000 mon.c (mon.2) 61 : audit [DBG] from='client.? 192.168.123.106:0/1205071230' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 7}]: dispatch 2026-03-08T23:04:24.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:23 vm06 bash[20625]: audit 2026-03-08T23:04:23.350781+0000 mon.c (mon.2) 61 : audit [DBG] from='client.? 192.168.123.106:0/1205071230' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 7}]: dispatch 2026-03-08T23:04:24.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:23 vm11 bash[23232]: audit 2026-03-08T23:04:23.177069+0000 mon.b (mon.1) 35 : audit [DBG] from='client.? 192.168.123.106:0/2008933836' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 5}]: dispatch 2026-03-08T23:04:24.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:23 vm11 bash[23232]: audit 2026-03-08T23:04:23.177069+0000 mon.b (mon.1) 35 : audit [DBG] from='client.? 192.168.123.106:0/2008933836' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 5}]: dispatch 2026-03-08T23:04:24.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:23 vm11 bash[23232]: audit 2026-03-08T23:04:23.225552+0000 mon.c (mon.2) 58 : audit [DBG] from='client.? 
192.168.123.106:0/3454463213' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-08T23:04:24.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:23 vm11 bash[23232]: audit 2026-03-08T23:04:23.225552+0000 mon.c (mon.2) 58 : audit [DBG] from='client.? 192.168.123.106:0/3454463213' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-08T23:04:24.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:23 vm11 bash[23232]: audit 2026-03-08T23:04:23.261714+0000 mon.c (mon.2) 59 : audit [DBG] from='client.? 192.168.123.106:0/547722311' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 6}]: dispatch 2026-03-08T23:04:24.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:23 vm11 bash[23232]: audit 2026-03-08T23:04:23.261714+0000 mon.c (mon.2) 59 : audit [DBG] from='client.? 192.168.123.106:0/547722311' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 6}]: dispatch 2026-03-08T23:04:24.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:23 vm11 bash[23232]: audit 2026-03-08T23:04:23.293031+0000 mon.c (mon.2) 60 : audit [DBG] from='client.? 192.168.123.106:0/110065167' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 3}]: dispatch 2026-03-08T23:04:24.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:23 vm11 bash[23232]: audit 2026-03-08T23:04:23.293031+0000 mon.c (mon.2) 60 : audit [DBG] from='client.? 192.168.123.106:0/110065167' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 3}]: dispatch 2026-03-08T23:04:24.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:23 vm11 bash[23232]: audit 2026-03-08T23:04:23.301162+0000 mon.a (mon.0) 797 : audit [DBG] from='client.? 
192.168.123.106:0/2694440586' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 4}]: dispatch 2026-03-08T23:04:24.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:23 vm11 bash[23232]: audit 2026-03-08T23:04:23.301162+0000 mon.a (mon.0) 797 : audit [DBG] from='client.? 192.168.123.106:0/2694440586' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 4}]: dispatch 2026-03-08T23:04:24.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:23 vm11 bash[23232]: audit 2026-03-08T23:04:23.325589+0000 mon.b (mon.1) 36 : audit [DBG] from='client.? 192.168.123.106:0/2393098563' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-08T23:04:24.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:23 vm11 bash[23232]: audit 2026-03-08T23:04:23.325589+0000 mon.b (mon.1) 36 : audit [DBG] from='client.? 192.168.123.106:0/2393098563' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-08T23:04:24.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:23 vm11 bash[23232]: audit 2026-03-08T23:04:23.350781+0000 mon.c (mon.2) 61 : audit [DBG] from='client.? 192.168.123.106:0/1205071230' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 7}]: dispatch 2026-03-08T23:04:24.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:23 vm11 bash[23232]: audit 2026-03-08T23:04:23.350781+0000 mon.c (mon.2) 61 : audit [DBG] from='client.? 
192.168.123.106:0/1205071230' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 7}]: dispatch 2026-03-08T23:04:25.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:24 vm06 bash[20625]: cluster 2026-03-08T23:04:23.632601+0000 mgr.y (mgr.24419) 61 : cluster [DBG] pgmap v25: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:04:25.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:24 vm06 bash[20625]: cluster 2026-03-08T23:04:23.632601+0000 mgr.y (mgr.24419) 61 : cluster [DBG] pgmap v25: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:04:25.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:24 vm06 bash[27746]: cluster 2026-03-08T23:04:23.632601+0000 mgr.y (mgr.24419) 61 : cluster [DBG] pgmap v25: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:04:25.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:24 vm06 bash[27746]: cluster 2026-03-08T23:04:23.632601+0000 mgr.y (mgr.24419) 61 : cluster [DBG] pgmap v25: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:04:25.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:24 vm11 bash[23232]: cluster 2026-03-08T23:04:23.632601+0000 mgr.y (mgr.24419) 61 : cluster [DBG] pgmap v25: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:04:25.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:24 vm11 bash[23232]: cluster 2026-03-08T23:04:23.632601+0000 mgr.y (mgr.24419) 61 : cluster [DBG] pgmap v25: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:04:27.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:27 vm11 bash[23232]: cluster 2026-03-08T23:04:25.633079+0000 mgr.y (mgr.24419) 62 : cluster [DBG] pgmap v26: 
132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:04:27.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:27 vm11 bash[23232]: cluster 2026-03-08T23:04:25.633079+0000 mgr.y (mgr.24419) 62 : cluster [DBG] pgmap v26: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:04:27.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:27 vm06 bash[20625]: cluster 2026-03-08T23:04:25.633079+0000 mgr.y (mgr.24419) 62 : cluster [DBG] pgmap v26: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:04:27.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:27 vm06 bash[20625]: cluster 2026-03-08T23:04:25.633079+0000 mgr.y (mgr.24419) 62 : cluster [DBG] pgmap v26: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:04:27.779 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:27 vm06 bash[27746]: cluster 2026-03-08T23:04:25.633079+0000 mgr.y (mgr.24419) 62 : cluster [DBG] pgmap v26: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:04:27.779 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:27 vm06 bash[27746]: cluster 2026-03-08T23:04:25.633079+0000 mgr.y (mgr.24419) 62 : cluster [DBG] pgmap v26: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:04:28.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:28 vm11 bash[23232]: cluster 2026-03-08T23:04:27.633350+0000 mgr.y (mgr.24419) 63 : cluster [DBG] pgmap v27: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:04:28.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:28 vm11 bash[23232]: cluster 2026-03-08T23:04:27.633350+0000 mgr.y (mgr.24419) 63 : cluster [DBG] pgmap v27: 
132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:04:28.572 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/mon.c/config 2026-03-08T23:04:28.586 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:28 vm06 bash[27746]: cluster 2026-03-08T23:04:27.633350+0000 mgr.y (mgr.24419) 63 : cluster [DBG] pgmap v27: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:04:28.586 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:28 vm06 bash[27746]: cluster 2026-03-08T23:04:27.633350+0000 mgr.y (mgr.24419) 63 : cluster [DBG] pgmap v27: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:04:28.586 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:28 vm06 bash[20625]: cluster 2026-03-08T23:04:27.633350+0000 mgr.y (mgr.24419) 63 : cluster [DBG] pgmap v27: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:04:28.586 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:28 vm06 bash[20625]: cluster 2026-03-08T23:04:27.633350+0000 mgr.y (mgr.24419) 63 : cluster [DBG] pgmap v27: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:04:28.809 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-08T23:04:28.811 INFO:teuthology.orchestra.run.vm06.stderr:dumped all 2026-03-08T23:04:28.871 
INFO:teuthology.orchestra.run.vm06.stdout:{"pg_ready":true,"pg_map":{"version":27,"stamp":"2026-03-08T23:04:27.633225+0000","last_osdmap_epoch":0,"last_pg_scan":0,"pg_stats_sum":{"stat_sum":{"num_bytes":465419,"num_objects":199,"num_object_clones":0,"num_object_copies":597,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":199,"num_whiteouts":0,"num_read":912,"num_read_kb":771,"num_write":505,"num_write_kb":629,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":2,"num_bytes_recovered":459280,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":538,"ondisk_log_size":538,"up":396,"acting":396,"num_store_stats":0},"osd_stats_sum":{"up_from":0,"seq":0,"num_pgs":396,"num_osds":8,"num_per_pool_osds":8,"num_per_pool_omap_osds":8,"kb":167739392,"kb_used":221280,"kb_used_data":6588,"kb_used_omap":12,"kb_used_meta":214515,"kb_avail":167518112,"statfs":{"total":171765137408,"available":171538546688,"internally_reserved":0,"allocated":6746112,"data_stored":3405265,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":12710,"internal_metadata":219663962},"hb_peers":[],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1
},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[]},"pg_stats_delta":{"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":10,"num_read_kb":10,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0,"stamp_delta":"12.002388"},"pg_stats":[{"pgid":"6.1b","version":"62'1","reported_seq":22,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.706258+0000","last_change":"2026-03-08T23:03:11.572368+0000","last_active":"2026-03-08T23:03:37.706258+0000","last_peered":"2026-03-08T23:03:37.706258+0000","last_clean":"2026-03-08T23:03:37.706258+0000","last_became_active":"2026-03-08T23:03:11.571527+0000","last_became_peered":"2026-03-08T23:03:11.571527+0000","last_unstale":"2026-03-08T23:03:37.706258+0000","last_undegraded":"2026-03-08T23:03:37.706258+0000","last_fullsized":"2026-03-08T23:03:37.706258+0000","mapping_epoch":60,"log_start":"0'0",
"ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:10.242477+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:10.242477+0000","last_clean_scrub_stamp":"2026-03-08T23:03:10.242477+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T01:47:09.213034+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":13,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":1,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,6],"acting":[3,7,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"2.1f","version":"0'0","reported_seq":33,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.603107+0000","last_change":"2026-03-08T23:03:05.265445+0000","last_active":"
2026-03-08T23:03:37.603107+0000","last_peered":"2026-03-08T23:03:37.603107+0000","last_clean":"2026-03-08T23:03:37.603107+0000","last_became_active":"2026-03-08T23:03:05.265160+0000","last_became_peered":"2026-03-08T23:03:05.265160+0000","last_unstale":"2026-03-08T23:03:37.603107+0000","last_undegraded":"2026-03-08T23:03:37.603107+0000","last_fullsized":"2026-03-08T23:03:37.603107+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:03.949608+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:03.949608+0000","last_clean_scrub_stamp":"2026-03-08T23:03:03.949608+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T07:09:55.505460+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,4],"acting":[0,7,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.1e","version":"62'10","reported_seq":44,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.706402+0000","last_change":"2026-03-08T23:03:07.144309+0000","last_active":"2026-03-08T23:03:37.706402+0000","last_peered":"2026-03-08T23:03:37.706402+0000","last_clean":"2026-03-08T23:03:37.706402+0000","last_became_active":"2026-03-08T23:03:07.144171+0000","last_became_peered":"2026-03-08T23:03:07.144171+0000","last_unstale":"2026-03-08T23:03:37.706402+0000","last_undegraded":"2026-03-08T23:03:37.706402+0000","last_fullsized":"2026-03-08T23:03:37.706402+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:06.109853+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:06.1
09853+0000","last_clean_scrub_stamp":"2026-03-08T23:03:06.109853+0000","objects_scrubbed":0,"log_size":10,"log_dups_size":0,"ondisk_log_size":10,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T10:42:56.896174+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":15,"num_read_kb":10,"num_write":10,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,6,2],"acting":[3,6,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"5.18","version":"0'0","reported_seq":25,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.598958+0000","last_change":"2026-03-08T23:03:09.260951+0000","last_active":"2026-03-08T23:03:37.598958+0000","last_peered":"2026-03-08T23:03:37.598958+0000","last_clean":"2026-03-08T23:03:37.598958+0000","last_became_active":"2026-03-08T23:03:09.260581+0000","last_became_peered":"2026-03-08T23:03:09.260581+00
00","last_unstale":"2026-03-08T23:03:37.598958+0000","last_undegraded":"2026-03-08T23:03:37.598958+0000","last_fullsized":"2026-03-08T23:03:37.598958+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:08.120926+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:08.120926+0000","last_clean_scrub_stamp":"2026-03-08T23:03:08.120926+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T10:56:20.389361+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,6,1],"acting":[4,6,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"2
.1e","version":"0'0","reported_seq":33,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.705919+0000","last_change":"2026-03-08T23:03:05.302876+0000","last_active":"2026-03-08T23:03:37.705919+0000","last_peered":"2026-03-08T23:03:37.705919+0000","last_clean":"2026-03-08T23:03:37.705919+0000","last_became_active":"2026-03-08T23:03:05.302712+0000","last_became_peered":"2026-03-08T23:03:05.302712+0000","last_unstale":"2026-03-08T23:03:37.705919+0000","last_undegraded":"2026-03-08T23:03:37.705919+0000","last_fullsized":"2026-03-08T23:03:37.705919+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:03.949608+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:03.949608+0000","last_clean_scrub_stamp":"2026-03-08T23:03:03.949608+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T05:15:51.288124+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,5],"acting":[3,0,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.1f","version":"62'11","reported_seq":48,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.603543+0000","last_change":"2026-03-08T23:03:07.143004+0000","last_active":"2026-03-08T23:03:37.603543+0000","last_peered":"2026-03-08T23:03:37.603543+0000","last_clean":"2026-03-08T23:03:37.603543+0000","last_became_active":"2026-03-08T23:03:07.142907+0000","last_became_peered":"2026-03-08T23:03:07.142907+0000","last_unstale":"2026-03-08T23:03:37.603543+0000","last_undegraded":"2026-03-08T23:03:37.603543+0000","last_fullsized":"2026-03-08T23:03:37.603543+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:06.109853+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:06.1
09853+0000","last_clean_scrub_stamp":"2026-03-08T23:03:06.109853+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T02:41:59.389599+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":20,"num_read_kb":13,"num_write":12,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,5,2],"acting":[0,5,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"5.19","version":"0'0","reported_seq":25,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.703501+0000","last_change":"2026-03-08T23:03:09.273772+0000","last_active":"2026-03-08T23:03:37.703501+0000","last_peered":"2026-03-08T23:03:37.703501+0000","last_clean":"2026-03-08T23:03:37.703501+0000","last_became_active":"2026-03-08T23:03:09.272256+0000","last_became_peered":"2026-03-08T23:03:09.272256+
0000","last_unstale":"2026-03-08T23:03:37.703501+0000","last_undegraded":"2026-03-08T23:03:37.703501+0000","last_fullsized":"2026-03-08T23:03:37.703501+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:08.120926+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:08.120926+0000","last_clean_scrub_stamp":"2026-03-08T23:03:08.120926+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T04:09:39.991343+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,7],"acting":[1,5,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":
"6.1a","version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.598200+0000","last_change":"2026-03-08T23:03:11.309877+0000","last_active":"2026-03-08T23:03:37.598200+0000","last_peered":"2026-03-08T23:03:37.598200+0000","last_clean":"2026-03-08T23:03:37.598200+0000","last_became_active":"2026-03-08T23:03:11.309035+0000","last_became_peered":"2026-03-08T23:03:11.309035+0000","last_unstale":"2026-03-08T23:03:37.598200+0000","last_undegraded":"2026-03-08T23:03:37.598200+0000","last_fullsized":"2026-03-08T23:03:37.598200+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:10.242477+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:10.242477+0000","last_clean_scrub_stamp":"2026-03-08T23:03:10.242477+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T03:44:22.742378+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,5,1],"acting":[4,5,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"2.1d","version":"0'0","reported_seq":33,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.704268+0000","last_change":"2026-03-08T23:03:05.352635+0000","last_active":"2026-03-08T23:03:37.704268+0000","last_peered":"2026-03-08T23:03:37.704268+0000","last_clean":"2026-03-08T23:03:37.704268+0000","last_became_active":"2026-03-08T23:03:05.352185+0000","last_became_peered":"2026-03-08T23:03:05.352185+0000","last_unstale":"2026-03-08T23:03:37.704268+0000","last_undegraded":"2026-03-08T23:03:37.704268+0000","last_fullsized":"2026-03-08T23:03:37.704268+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:03.949608+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:03.949
608+0000","last_clean_scrub_stamp":"2026-03-08T23:03:03.949608+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T02:59:43.776557+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,6,0],"acting":[7,6,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.1c","version":"62'15","reported_seq":54,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.706739+0000","last_change":"2026-03-08T23:03:07.159152+0000","last_active":"2026-03-08T23:03:37.706739+0000","last_peered":"2026-03-08T23:03:37.706739+0000","last_clean":"2026-03-08T23:03:37.706739+0000","last_became_active":"2026-03-08T23:03:07.159019+0000","last_became_peered":"2026-03-08T23:03:07.159019+0000","l
ast_unstale":"2026-03-08T23:03:37.706739+0000","last_undegraded":"2026-03-08T23:03:37.706739+0000","last_fullsized":"2026-03-08T23:03:37.706739+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:06.109853+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:06.109853+0000","last_clean_scrub_stamp":"2026-03-08T23:03:06.109853+0000","objects_scrubbed":0,"log_size":15,"log_dups_size":0,"ondisk_log_size":15,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-09T23:27:11.074114+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":26,"num_read_kb":17,"num_write":16,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,4,1],"acting":[5,4,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":
"5.1a","version":"0'0","reported_seq":25,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.704296+0000","last_change":"2026-03-08T23:03:09.272176+0000","last_active":"2026-03-08T23:03:37.704296+0000","last_peered":"2026-03-08T23:03:37.704296+0000","last_clean":"2026-03-08T23:03:37.704296+0000","last_became_active":"2026-03-08T23:03:09.272051+0000","last_became_peered":"2026-03-08T23:03:09.272051+0000","last_unstale":"2026-03-08T23:03:37.704296+0000","last_undegraded":"2026-03-08T23:03:37.704296+0000","last_fullsized":"2026-03-08T23:03:37.704296+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:08.120926+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:08.120926+0000","last_clean_scrub_stamp":"2026-03-08T23:03:08.120926+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T00:36:40.231124+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,1],"acting":[7,4,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"6.19","version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.706705+0000","last_change":"2026-03-08T23:03:11.330324+0000","last_active":"2026-03-08T23:03:37.706705+0000","last_peered":"2026-03-08T23:03:37.706705+0000","last_clean":"2026-03-08T23:03:37.706705+0000","last_became_active":"2026-03-08T23:03:11.330132+0000","last_became_peered":"2026-03-08T23:03:11.330132+0000","last_unstale":"2026-03-08T23:03:37.706705+0000","last_undegraded":"2026-03-08T23:03:37.706705+0000","last_fullsized":"2026-03-08T23:03:37.706705+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:10.242477+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:10.242
477+0000","last_clean_scrub_stamp":"2026-03-08T23:03:10.242477+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T03:45:48.893007+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,3],"acting":[5,1,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"2.1c","version":"0'0","reported_seq":33,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.704489+0000","last_change":"2026-03-08T23:03:05.306509+0000","last_active":"2026-03-08T23:03:37.704489+0000","last_peered":"2026-03-08T23:03:37.704489+0000","last_clean":"2026-03-08T23:03:37.704489+0000","last_became_active":"2026-03-08T23:03:05.306317+0000","last_became_peered":"2026-03-08T23:03:05.306317+0000","las
t_unstale":"2026-03-08T23:03:37.704489+0000","last_undegraded":"2026-03-08T23:03:37.704489+0000","last_fullsized":"2026-03-08T23:03:37.704489+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:03.949608+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:03.949608+0000","last_clean_scrub_stamp":"2026-03-08T23:03:03.949608+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T08:01:08.158439+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,5,2],"acting":[7,5,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.1d","ve
rsion":"62'12","reported_seq":52,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.706045+0000","last_change":"2026-03-08T23:03:07.149272+0000","last_active":"2026-03-08T23:03:37.706045+0000","last_peered":"2026-03-08T23:03:37.706045+0000","last_clean":"2026-03-08T23:03:37.706045+0000","last_became_active":"2026-03-08T23:03:07.149170+0000","last_became_peered":"2026-03-08T23:03:07.149170+0000","last_unstale":"2026-03-08T23:03:37.706045+0000","last_undegraded":"2026-03-08T23:03:37.706045+0000","last_fullsized":"2026-03-08T23:03:37.706045+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:06.109853+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:06.109853+0000","last_clean_scrub_stamp":"2026-03-08T23:03:06.109853+0000","objects_scrubbed":0,"log_size":12,"log_dups_size":0,"ondisk_log_size":12,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T08:59:14.814510+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":220,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":25,"num_read_kb":16,"num_write":14,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,4,6],"acting":[5,4,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"5.1b","version":"0'0","reported_seq":25,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.706059+0000","last_change":"2026-03-08T23:03:09.270502+0000","last_active":"2026-03-08T23:03:37.706059+0000","last_peered":"2026-03-08T23:03:37.706059+0000","last_clean":"2026-03-08T23:03:37.706059+0000","last_became_active":"2026-03-08T23:03:09.270267+0000","last_became_peered":"2026-03-08T23:03:09.270267+0000","last_unstale":"2026-03-08T23:03:37.706059+0000","last_undegraded":"2026-03-08T23:03:37.706059+0000","last_fullsized":"2026-03-08T23:03:37.706059+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:08.120926+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:
08.120926+0000","last_clean_scrub_stamp":"2026-03-08T23:03:08.120926+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T10:18:55.781881+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,0,7],"acting":[5,0,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.18","version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.602400+0000","last_change":"2026-03-08T23:03:11.306586+0000","last_active":"2026-03-08T23:03:37.602400+0000","last_peered":"2026-03-08T23:03:37.602400+0000","last_clean":"2026-03-08T23:03:37.602400+0000","last_became_active":"2026-03-08T23:03:11.305881+0000","last_became_peered":"2026-03-08T23:03:11.305881+0000
","last_unstale":"2026-03-08T23:03:37.602400+0000","last_undegraded":"2026-03-08T23:03:37.602400+0000","last_fullsized":"2026-03-08T23:03:37.602400+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:10.242477+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:10.242477+0000","last_clean_scrub_stamp":"2026-03-08T23:03:10.242477+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T09:05:49.685712+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,1,7],"acting":[0,1,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.a
","version":"63'19","reported_seq":60,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.703419+0000","last_change":"2026-03-08T23:03:07.147417+0000","last_active":"2026-03-08T23:03:37.703419+0000","last_peered":"2026-03-08T23:03:37.703419+0000","last_clean":"2026-03-08T23:03:37.703419+0000","last_became_active":"2026-03-08T23:03:07.146002+0000","last_became_peered":"2026-03-08T23:03:07.146002+0000","last_unstale":"2026-03-08T23:03:37.703419+0000","last_undegraded":"2026-03-08T23:03:37.703419+0000","last_fullsized":"2026-03-08T23:03:37.703419+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:06.109853+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:06.109853+0000","last_clean_scrub_stamp":"2026-03-08T23:03:06.109853+0000","objects_scrubbed":0,"log_size":19,"log_dups_size":0,"ondisk_log_size":19,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T07:08:52.792912+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":9,"num_object_clones":0,"num_object_copies":27,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":9,"num_whiteouts":0,"num_read":32,"num_read_kb":21,"num_write":20,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,4,1],"acting":[6,4,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"2.b","version":"0'0","reported_seq":33,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.704386+0000","last_change":"2026-03-08T23:03:05.306556+0000","last_active":"2026-03-08T23:03:37.704386+0000","last_peered":"2026-03-08T23:03:37.704386+0000","last_clean":"2026-03-08T23:03:37.704386+0000","last_became_active":"2026-03-08T23:03:05.306189+0000","last_became_peered":"2026-03-08T23:03:05.306189+0000","last_unstale":"2026-03-08T23:03:37.704386+0000","last_undegraded":"2026-03-08T23:03:37.704386+0000","last_fullsized":"2026-03-08T23:03:37.704386+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:03.949608+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:0
3.949608+0000","last_clean_scrub_stamp":"2026-03-08T23:03:03.949608+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T10:18:44.920517+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,5],"acting":[7,4,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.c","version":"0'0","reported_seq":25,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.704512+0000","last_change":"2026-03-08T23:03:09.259574+0000","last_active":"2026-03-08T23:03:37.704512+0000","last_peered":"2026-03-08T23:03:37.704512+0000","last_clean":"2026-03-08T23:03:37.704512+0000","last_became_active":"2026-03-08T23:03:09.259361+0000","last_became_peered":"2026-03-08T23:03:09.259361+0000",
"last_unstale":"2026-03-08T23:03:37.704512+0000","last_undegraded":"2026-03-08T23:03:37.704512+0000","last_fullsized":"2026-03-08T23:03:37.704512+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:08.120926+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:08.120926+0000","last_clean_scrub_stamp":"2026-03-08T23:03:08.120926+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T06:33:22.882858+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,4,0],"acting":[1,4,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"6.f",
"version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.604954+0000","last_change":"2026-03-08T23:03:11.323886+0000","last_active":"2026-03-08T23:03:37.604954+0000","last_peered":"2026-03-08T23:03:37.604954+0000","last_clean":"2026-03-08T23:03:37.604954+0000","last_became_active":"2026-03-08T23:03:11.323772+0000","last_became_peered":"2026-03-08T23:03:11.323772+0000","last_unstale":"2026-03-08T23:03:37.604954+0000","last_undegraded":"2026-03-08T23:03:37.604954+0000","last_fullsized":"2026-03-08T23:03:37.604954+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:10.242477+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:10.242477+0000","last_clean_scrub_stamp":"2026-03-08T23:03:10.242477+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T04:05:48.283847+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,3,4],"acting":[2,3,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"3.b","version":"62'9","reported_seq":45,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.705564+0000","last_change":"2026-03-08T23:03:07.147742+0000","last_active":"2026-03-08T23:03:37.705564+0000","last_peered":"2026-03-08T23:03:37.705564+0000","last_clean":"2026-03-08T23:03:37.705564+0000","last_became_active":"2026-03-08T23:03:07.145871+0000","last_became_peered":"2026-03-08T23:03:07.145871+0000","last_unstale":"2026-03-08T23:03:37.705564+0000","last_undegraded":"2026-03-08T23:03:37.705564+0000","last_fullsized":"2026-03-08T23:03:37.705564+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:06.109853+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:06.109
853+0000","last_clean_scrub_stamp":"2026-03-08T23:03:06.109853+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T08:38:50.191279+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,4],"acting":[3,0,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"2.a","version":"0'0","reported_seq":33,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.703833+0000","last_change":"2026-03-08T23:03:05.258141+0000","last_active":"2026-03-08T23:03:37.703833+0000","last_peered":"2026-03-08T23:03:37.703833+0000","last_clean":"2026-03-08T23:03:37.703833+0000","last_became_active":"2026-03-08T23:03:05.257993+0000","last_became_peered":"2026-03-08T23:03:05.257993+0000"
,"last_unstale":"2026-03-08T23:03:37.703833+0000","last_undegraded":"2026-03-08T23:03:37.703833+0000","last_fullsized":"2026-03-08T23:03:37.703833+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:03.949608+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:03.949608+0000","last_clean_scrub_stamp":"2026-03-08T23:03:03.949608+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-09T23:22:36.458095+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,3,7],"acting":[1,3,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"5.d"
,"version":"63'11","reported_seq":51,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:04:15.397560+0000","last_change":"2026-03-08T23:03:09.269523+0000","last_active":"2026-03-08T23:04:15.397560+0000","last_peered":"2026-03-08T23:04:15.397560+0000","last_clean":"2026-03-08T23:04:15.397560+0000","last_became_active":"2026-03-08T23:03:09.269433+0000","last_became_peered":"2026-03-08T23:03:09.269433+0000","last_unstale":"2026-03-08T23:04:15.397560+0000","last_undegraded":"2026-03-08T23:04:15.397560+0000","last_fullsized":"2026-03-08T23:04:15.397560+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:08.120926+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:08.120926+0000","last_clean_scrub_stamp":"2026-03-08T23:03:08.120926+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T06:55:52.374613+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,7,5],"acting":[2,7,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.e","version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.599489+0000","last_change":"2026-03-08T23:03:11.301601+0000","last_active":"2026-03-08T23:03:37.599489+0000","last_peered":"2026-03-08T23:03:37.599489+0000","last_clean":"2026-03-08T23:03:37.599489+0000","last_became_active":"2026-03-08T23:03:11.301037+0000","last_became_peered":"2026-03-08T23:03:11.301037+0000","last_unstale":"2026-03-08T23:03:37.599489+0000","last_undegraded":"2026-03-08T23:03:37.599489+0000","last_fullsized":"2026-03-08T23:03:37.599489+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:10.242477+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:10.2424
77+0000","last_clean_scrub_stamp":"2026-03-08T23:03:10.242477+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T07:10:40.964320+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,1,2],"acting":[4,1,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3.8","version":"62'15","reported_seq":54,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.705622+0000","last_change":"2026-03-08T23:03:07.151750+0000","last_active":"2026-03-08T23:03:37.705622+0000","last_peered":"2026-03-08T23:03:37.705622+0000","last_clean":"2026-03-08T23:03:37.705622+0000","last_became_active":"2026-03-08T23:03:07.151662+0000","last_became_peered":"2026-03-08T23:03:07.151662+0000","las
t_unstale":"2026-03-08T23:03:37.705622+0000","last_undegraded":"2026-03-08T23:03:37.705622+0000","last_fullsized":"2026-03-08T23:03:37.705622+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:06.109853+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:06.109853+0000","last_clean_scrub_stamp":"2026-03-08T23:03:06.109853+0000","objects_scrubbed":0,"log_size":15,"log_dups_size":0,"ondisk_log_size":15,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-09T23:46:06.961838+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":26,"num_read_kb":17,"num_write":16,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,1,7],"acting":[3,1,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"2
.9","version":"0'0","reported_seq":33,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.703884+0000","last_change":"2026-03-08T23:03:05.258074+0000","last_active":"2026-03-08T23:03:37.703884+0000","last_peered":"2026-03-08T23:03:37.703884+0000","last_clean":"2026-03-08T23:03:37.703884+0000","last_became_active":"2026-03-08T23:03:05.257838+0000","last_became_peered":"2026-03-08T23:03:05.257838+0000","last_unstale":"2026-03-08T23:03:37.703884+0000","last_undegraded":"2026-03-08T23:03:37.703884+0000","last_fullsized":"2026-03-08T23:03:37.703884+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:03.949608+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:03.949608+0000","last_clean_scrub_stamp":"2026-03-08T23:03:03.949608+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T03:56:48.299746+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,7,3],"acting":[1,7,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"5.e","version":"63'11","reported_seq":54,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:04:15.396219+0000","last_change":"2026-03-08T23:03:09.259469+0000","last_active":"2026-03-08T23:04:15.396219+0000","last_peered":"2026-03-08T23:04:15.396219+0000","last_clean":"2026-03-08T23:04:15.396219+0000","last_became_active":"2026-03-08T23:03:09.259374+0000","last_became_peered":"2026-03-08T23:03:09.259374+0000","last_unstale":"2026-03-08T23:04:15.396219+0000","last_undegraded":"2026-03-08T23:04:15.396219+0000","last_fullsized":"2026-03-08T23:04:15.396219+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:08.120926+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:08.12
0926+0000","last_clean_scrub_stamp":"2026-03-08T23:03:08.120926+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T07:11:44.683631+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,5,0],"acting":[4,5,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"6.d","version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.705160+0000","last_change":"2026-03-08T23:03:11.313287+0000","last_active":"2026-03-08T23:03:37.705160+0000","last_peered":"2026-03-08T23:03:37.705160+0000","last_clean":"2026-03-08T23:03:37.705160+0000","last_became_active":"2026-03-08T23:03:11.313145+0000","last_became_peered":"2026-03-08T23:03:11.313145+0000","l
ast_unstale":"2026-03-08T23:03:37.705160+0000","last_undegraded":"2026-03-08T23:03:37.705160+0000","last_fullsized":"2026-03-08T23:03:37.705160+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:10.242477+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:10.242477+0000","last_clean_scrub_stamp":"2026-03-08T23:03:10.242477+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T02:33:51.990289+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,0],"acting":[5,1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"3.9","v
ersion":"62'12","reported_seq":52,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.598384+0000","last_change":"2026-03-08T23:03:07.157761+0000","last_active":"2026-03-08T23:03:37.598384+0000","last_peered":"2026-03-08T23:03:37.598384+0000","last_clean":"2026-03-08T23:03:37.598384+0000","last_became_active":"2026-03-08T23:03:07.157647+0000","last_became_peered":"2026-03-08T23:03:07.157647+0000","last_unstale":"2026-03-08T23:03:37.598384+0000","last_undegraded":"2026-03-08T23:03:37.598384+0000","last_fullsized":"2026-03-08T23:03:37.598384+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:06.109853+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:06.109853+0000","last_clean_scrub_stamp":"2026-03-08T23:03:06.109853+0000","objects_scrubbed":0,"log_size":12,"log_dups_size":0,"ondisk_log_size":12,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T04:07:03.975955+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":220,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":25,"num_read_kb":16,"num_write":14,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,2,7],"acting":[4,2,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"2.8","version":"0'0","reported_seq":33,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.704340+0000","last_change":"2026-03-08T23:03:05.351769+0000","last_active":"2026-03-08T23:03:37.704340+0000","last_peered":"2026-03-08T23:03:37.704340+0000","last_clean":"2026-03-08T23:03:37.704340+0000","last_became_active":"2026-03-08T23:03:05.351636+0000","last_became_peered":"2026-03-08T23:03:05.351636+0000","last_unstale":"2026-03-08T23:03:37.704340+0000","last_undegraded":"2026-03-08T23:03:37.704340+0000","last_fullsized":"2026-03-08T23:03:37.704340+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:03.949608+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:0
3.949608+0000","last_clean_scrub_stamp":"2026-03-08T23:03:03.949608+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T09:34:10.951093+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,0,1],"acting":[7,0,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.f","version":"0'0","reported_seq":25,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.707037+0000","last_change":"2026-03-08T23:03:09.260254+0000","last_active":"2026-03-08T23:03:37.707037+0000","last_peered":"2026-03-08T23:03:37.707037+0000","last_clean":"2026-03-08T23:03:37.707037+0000","last_became_active":"2026-03-08T23:03:09.260175+0000","last_became_peered":"2026-03-08T23:03:09.260175+0000",
"last_unstale":"2026-03-08T23:03:37.707037+0000","last_undegraded":"2026-03-08T23:03:37.707037+0000","last_fullsized":"2026-03-08T23:03:37.707037+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:08.120926+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:08.120926+0000","last_clean_scrub_stamp":"2026-03-08T23:03:08.120926+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T08:32:30.300842+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,4,6],"acting":[5,4,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.c",
"version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.705421+0000","last_change":"2026-03-08T23:03:11.571776+0000","last_active":"2026-03-08T23:03:37.705421+0000","last_peered":"2026-03-08T23:03:37.705421+0000","last_clean":"2026-03-08T23:03:37.705421+0000","last_became_active":"2026-03-08T23:03:11.570841+0000","last_became_peered":"2026-03-08T23:03:11.570841+0000","last_unstale":"2026-03-08T23:03:37.705421+0000","last_undegraded":"2026-03-08T23:03:37.705421+0000","last_fullsized":"2026-03-08T23:03:37.705421+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:10.242477+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:10.242477+0000","last_clean_scrub_stamp":"2026-03-08T23:03:10.242477+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T06:09:44.411491+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,6,5],"acting":[3,6,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.6","version":"62'12","reported_seq":47,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.605162+0000","last_change":"2026-03-08T23:03:07.149537+0000","last_active":"2026-03-08T23:03:37.605162+0000","last_peered":"2026-03-08T23:03:37.605162+0000","last_clean":"2026-03-08T23:03:37.605162+0000","last_became_active":"2026-03-08T23:03:07.149449+0000","last_became_peered":"2026-03-08T23:03:07.149449+0000","last_unstale":"2026-03-08T23:03:37.605162+0000","last_undegraded":"2026-03-08T23:03:37.605162+0000","last_fullsized":"2026-03-08T23:03:37.605162+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:06.109853+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:06.10
9853+0000","last_clean_scrub_stamp":"2026-03-08T23:03:06.109853+0000","objects_scrubbed":0,"log_size":12,"log_dups_size":0,"ondisk_log_size":12,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T10:23:50.732022+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":6,"num_object_clones":0,"num_object_copies":18,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":6,"num_whiteouts":0,"num_read":18,"num_read_kb":12,"num_write":12,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,1,4],"acting":[0,1,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"2.7","version":"0'0","reported_seq":33,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.703679+0000","last_change":"2026-03-08T23:03:05.266738+0000","last_active":"2026-03-08T23:03:37.703679+0000","last_peered":"2026-03-08T23:03:37.703679+0000","last_clean":"2026-03-08T23:03:37.703679+0000","last_became_active":"2026-03-08T23:03:05.266518+0000","last_became_peered":"2026-03-08T23:03:05.266518+0000
","last_unstale":"2026-03-08T23:03:37.703679+0000","last_undegraded":"2026-03-08T23:03:37.703679+0000","last_fullsized":"2026-03-08T23:03:37.703679+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:03.949608+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:03.949608+0000","last_clean_scrub_stamp":"2026-03-08T23:03:03.949608+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T10:49:08.039597+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,7,2],"acting":[6,7,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"4.1
","version":"62'1","reported_seq":35,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.599460+0000","last_change":"2026-03-08T23:03:14.653787+0000","last_active":"2026-03-08T23:03:37.599460+0000","last_peered":"2026-03-08T23:03:37.599460+0000","last_clean":"2026-03-08T23:03:37.599460+0000","last_became_active":"2026-03-08T23:03:08.139918+0000","last_became_peered":"2026-03-08T23:03:08.139918+0000","last_unstale":"2026-03-08T23:03:37.599460+0000","last_undegraded":"2026-03-08T23:03:37.599460+0000","last_fullsized":"2026-03-08T23:03:37.599460+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:07.117572+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:07.117572+0000","last_clean_scrub_stamp":"2026-03-08T23:03:07.117572+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-09T23:10:50.001503+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00031847599999999999,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,5,6],"acting":[4,5,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"5.0","version":"63'11","reported_seq":51,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:04:15.397186+0000","last_change":"2026-03-08T23:03:09.269944+0000","last_active":"2026-03-08T23:04:15.397186+0000","last_peered":"2026-03-08T23:04:15.397186+0000","last_clean":"2026-03-08T23:04:15.397186+0000","last_became_active":"2026-03-08T23:03:09.269840+0000","last_became_peered":"2026-03-08T23:03:09.269840+0000","last_unstale":"2026-03-08T23:04:15.397186+0000","last_undegraded":"2026-03-08T23:04:15.397186+0000","last_fullsized":"2026-03-08T23:04:15.397186+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:08.120926+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2
026-03-08T23:03:08.120926+0000","last_clean_scrub_stamp":"2026-03-08T23:03:08.120926+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T04:09:18.229864+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,1,4],"acting":[3,1,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"6.3","version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.703784+0000","last_change":"2026-03-08T23:03:11.572888+0000","last_active":"2026-03-08T23:03:37.703784+0000","last_peered":"2026-03-08T23:03:37.703784+0000","last_clean":"2026-03-08T23:03:37.703784+0000","last_became_active":"2026-03-08T23:03:11.572686+0000","last_became_peered":"2026-03-08T23:
03:11.572686+0000","last_unstale":"2026-03-08T23:03:37.703784+0000","last_undegraded":"2026-03-08T23:03:37.703784+0000","last_fullsized":"2026-03-08T23:03:37.703784+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:10.242477+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:10.242477+0000","last_clean_scrub_stamp":"2026-03-08T23:03:10.242477+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T07:29:36.565705+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,6,2],"acting":[7,6,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps"
:[]},{"pgid":"3.7","version":"62'13","reported_seq":56,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.705697+0000","last_change":"2026-03-08T23:03:07.151778+0000","last_active":"2026-03-08T23:03:37.705697+0000","last_peered":"2026-03-08T23:03:37.705697+0000","last_clean":"2026-03-08T23:03:37.705697+0000","last_became_active":"2026-03-08T23:03:07.151698+0000","last_became_peered":"2026-03-08T23:03:07.151698+0000","last_unstale":"2026-03-08T23:03:37.705697+0000","last_undegraded":"2026-03-08T23:03:37.705697+0000","last_fullsized":"2026-03-08T23:03:37.705697+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:06.109853+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:06.109853+0000","last_clean_scrub_stamp":"2026-03-08T23:03:06.109853+0000","objects_scrubbed":0,"log_size":13,"log_dups_size":0,"ondisk_log_size":13,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T05:40:24.616411+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":330,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":30,"num_read_kb":19,"num_write":16,"num_write_kb":3,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,0],"acting":[3,7,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"2.6","version":"55'1","reported_seq":34,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.703940+0000","last_change":"2026-03-08T23:03:05.259562+0000","last_active":"2026-03-08T23:03:37.703940+0000","last_peered":"2026-03-08T23:03:37.703940+0000","last_clean":"2026-03-08T23:03:37.703940+0000","last_became_active":"2026-03-08T23:03:05.259457+0000","last_became_peered":"2026-03-08T23:03:05.259457+0000","last_unstale":"2026-03-08T23:03:37.703940+0000","last_undegraded":"2026-03-08T23:03:37.703940+0000","last_fullsized":"2026-03-08T23:03:37.703940+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:03.949608+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:
03.949608+0000","last_clean_scrub_stamp":"2026-03-08T23:03:03.949608+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T03:14:47.267473+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":46,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":1,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,6,4],"acting":[1,6,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"4.0","version":"65'5","reported_seq":105,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:04:23.141412+0000","last_change":"2026-03-08T23:03:14.975893+0000","last_active":"2026-03-08T23:04:23.141412+0000","last_peered":"2026-03-08T23:04:23.141412+0000","last_clean":"2026-03-08T23:04:23.141412+0000","last_became_active":"2026-03-08T23:03:08.143743+0000","last_became_peered":"2026-03-08T23:03:08.143743+00
00","last_unstale":"2026-03-08T23:04:23.141412+0000","last_undegraded":"2026-03-08T23:04:23.141412+0000","last_fullsized":"2026-03-08T23:04:23.141412+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:07.117572+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:07.117572+0000","last_clean_scrub_stamp":"2026-03-08T23:03:07.117572+0000","objects_scrubbed":0,"log_size":5,"log_dups_size":0,"ondisk_log_size":5,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T04:57:49.218220+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00076602300000000001,"stat_sum":{"num_bytes":389,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":68,"num_read_kb":63,"num_write":4,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,0],"acting":[3,7,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"pur
ged_snaps":[]},{"pgid":"5.1","version":"0'0","reported_seq":25,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.598839+0000","last_change":"2026-03-08T23:03:09.271372+0000","last_active":"2026-03-08T23:03:37.598839+0000","last_peered":"2026-03-08T23:03:37.598839+0000","last_clean":"2026-03-08T23:03:37.598839+0000","last_became_active":"2026-03-08T23:03:09.271293+0000","last_became_peered":"2026-03-08T23:03:09.271293+0000","last_unstale":"2026-03-08T23:03:37.598839+0000","last_undegraded":"2026-03-08T23:03:37.598839+0000","last_fullsized":"2026-03-08T23:03:37.598839+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:08.120926+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:08.120926+0000","last_clean_scrub_stamp":"2026-03-08T23:03:08.120926+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T04:52:56.466257+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,3,7],"acting":[4,3,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"6.2","version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.598760+0000","last_change":"2026-03-08T23:03:11.322100+0000","last_active":"2026-03-08T23:03:37.598760+0000","last_peered":"2026-03-08T23:03:37.598760+0000","last_clean":"2026-03-08T23:03:37.598760+0000","last_became_active":"2026-03-08T23:03:11.321986+0000","last_became_peered":"2026-03-08T23:03:11.321986+0000","last_unstale":"2026-03-08T23:03:37.598760+0000","last_undegraded":"2026-03-08T23:03:37.598760+0000","last_fullsized":"2026-03-08T23:03:37.598760+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:10.242477+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:10.2424
77+0000","last_clean_scrub_stamp":"2026-03-08T23:03:10.242477+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T01:01:44.960156+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,3,2],"acting":[4,3,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3.4","version":"63'30","reported_seq":95,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:04:15.396970+0000","last_change":"2026-03-08T23:03:07.146444+0000","last_active":"2026-03-08T23:04:15.396970+0000","last_peered":"2026-03-08T23:04:15.396970+0000","last_clean":"2026-03-08T23:04:15.396970+0000","last_became_active":"2026-03-08T23:03:07.146311+0000","last_became_peered":"2026-03-08T23:03:07.146311+0000","las
t_unstale":"2026-03-08T23:04:15.396970+0000","last_undegraded":"2026-03-08T23:04:15.396970+0000","last_fullsized":"2026-03-08T23:04:15.396970+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:06.109853+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:06.109853+0000","last_clean_scrub_stamp":"2026-03-08T23:03:06.109853+0000","objects_scrubbed":0,"log_size":30,"log_dups_size":0,"ondisk_log_size":30,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T06:45:41.090724+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":358,"num_objects":10,"num_object_clones":0,"num_object_copies":30,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":10,"num_whiteouts":0,"num_read":51,"num_read_kb":36,"num_write":26,"num_write_kb":4,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,2,5],"acting":[1,2,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":
"2.5","version":"0'0","reported_seq":33,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.707797+0000","last_change":"2026-03-08T23:03:05.352431+0000","last_active":"2026-03-08T23:03:37.707797+0000","last_peered":"2026-03-08T23:03:37.707797+0000","last_clean":"2026-03-08T23:03:37.707797+0000","last_became_active":"2026-03-08T23:03:05.352037+0000","last_became_peered":"2026-03-08T23:03:05.352037+0000","last_unstale":"2026-03-08T23:03:37.707797+0000","last_undegraded":"2026-03-08T23:03:37.707797+0000","last_fullsized":"2026-03-08T23:03:37.707797+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:03.949608+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:03.949608+0000","last_clean_scrub_stamp":"2026-03-08T23:03:03.949608+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T04:28:44.219812+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,0,4],"acting":[7,0,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.2","version":"0'0","reported_seq":25,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.703962+0000","last_change":"2026-03-08T23:03:09.266587+0000","last_active":"2026-03-08T23:03:37.703962+0000","last_peered":"2026-03-08T23:03:37.703962+0000","last_clean":"2026-03-08T23:03:37.703962+0000","last_became_active":"2026-03-08T23:03:09.266483+0000","last_became_peered":"2026-03-08T23:03:09.266483+0000","last_unstale":"2026-03-08T23:03:37.703962+0000","last_undegraded":"2026-03-08T23:03:37.703962+0000","last_fullsized":"2026-03-08T23:03:37.703962+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:08.120926+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:08.1209
26+0000","last_clean_scrub_stamp":"2026-03-08T23:03:08.120926+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T01:23:31.451403+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,0,5],"acting":[6,0,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"6.1","version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.703529+0000","last_change":"2026-03-08T23:03:11.569014+0000","last_active":"2026-03-08T23:03:37.703529+0000","last_peered":"2026-03-08T23:03:37.703529+0000","last_clean":"2026-03-08T23:03:37.703529+0000","last_became_active":"2026-03-08T23:03:11.568850+0000","last_became_peered":"2026-03-08T23:03:11.568850+0000","last_
unstale":"2026-03-08T23:03:37.703529+0000","last_undegraded":"2026-03-08T23:03:37.703529+0000","last_fullsized":"2026-03-08T23:03:37.703529+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:10.242477+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:10.242477+0000","last_clean_scrub_stamp":"2026-03-08T23:03:10.242477+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-09T23:52:40.696997+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,6,2],"acting":[1,6,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"3.5","versi
on":"62'16","reported_seq":67,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:04:15.396085+0000","last_change":"2026-03-08T23:03:07.146923+0000","last_active":"2026-03-08T23:04:15.396085+0000","last_peered":"2026-03-08T23:04:15.396085+0000","last_clean":"2026-03-08T23:04:15.396085+0000","last_became_active":"2026-03-08T23:03:07.143329+0000","last_became_peered":"2026-03-08T23:03:07.143329+0000","last_unstale":"2026-03-08T23:04:15.396085+0000","last_undegraded":"2026-03-08T23:04:15.396085+0000","last_fullsized":"2026-03-08T23:04:15.396085+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:06.109853+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:06.109853+0000","last_clean_scrub_stamp":"2026-03-08T23:03:06.109853+0000","objects_scrubbed":0,"log_size":16,"log_dups_size":0,"ondisk_log_size":16,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T08:19:27.131443+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":154,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":25,"num_read_kb":15,"num_write":13,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3,2],"acting":[5,3,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"2.4","version":"0'0","reported_seq":33,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.704256+0000","last_change":"2026-03-08T23:03:05.272548+0000","last_active":"2026-03-08T23:03:37.704256+0000","last_peered":"2026-03-08T23:03:37.704256+0000","last_clean":"2026-03-08T23:03:37.704256+0000","last_became_active":"2026-03-08T23:03:05.272360+0000","last_became_peered":"2026-03-08T23:03:05.272360+0000","last_unstale":"2026-03-08T23:03:37.704256+0000","last_undegraded":"2026-03-08T23:03:37.704256+0000","last_fullsized":"2026-03-08T23:03:37.704256+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:03.949608+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:0
3.949608+0000","last_clean_scrub_stamp":"2026-03-08T23:03:03.949608+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T07:34:11.504201+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,0,7],"acting":[1,0,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"4.2","version":"64'2","reported_seq":36,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.704319+0000","last_change":"2026-03-08T23:03:14.661508+0000","last_active":"2026-03-08T23:03:37.704319+0000","last_peered":"2026-03-08T23:03:37.704319+0000","last_clean":"2026-03-08T23:03:37.704319+0000","last_became_active":"2026-03-08T23:03:08.142845+0000","last_became_peered":"2026-03-08T23:03:08.142845+0000"
,"last_unstale":"2026-03-08T23:03:37.704319+0000","last_undegraded":"2026-03-08T23:03:37.704319+0000","last_fullsized":"2026-03-08T23:03:37.704319+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:07.117572+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:07.117572+0000","last_clean_scrub_stamp":"2026-03-08T23:03:07.117572+0000","objects_scrubbed":0,"log_size":2,"log_dups_size":0,"ondisk_log_size":2,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T06:34:53.911312+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.001013176,"stat_sum":{"num_bytes":19,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":2,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,4],"acting":[1,5,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"
pgid":"5.3","version":"63'11","reported_seq":51,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:04:15.396991+0000","last_change":"2026-03-08T23:03:09.262286+0000","last_active":"2026-03-08T23:04:15.396991+0000","last_peered":"2026-03-08T23:04:15.396991+0000","last_clean":"2026-03-08T23:04:15.396991+0000","last_became_active":"2026-03-08T23:03:09.262111+0000","last_became_peered":"2026-03-08T23:03:09.262111+0000","last_unstale":"2026-03-08T23:04:15.396991+0000","last_undegraded":"2026-03-08T23:04:15.396991+0000","last_fullsized":"2026-03-08T23:04:15.396991+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:08.120926+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:08.120926+0000","last_clean_scrub_stamp":"2026-03-08T23:03:08.120926+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T03:05:50.392258+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,6,5],"acting":[0,6,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"6.0","version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.603277+0000","last_change":"2026-03-08T23:03:11.321709+0000","last_active":"2026-03-08T23:03:37.603277+0000","last_peered":"2026-03-08T23:03:37.603277+0000","last_clean":"2026-03-08T23:03:37.603277+0000","last_became_active":"2026-03-08T23:03:11.321586+0000","last_became_peered":"2026-03-08T23:03:11.321586+0000","last_unstale":"2026-03-08T23:03:37.603277+0000","last_undegraded":"2026-03-08T23:03:37.603277+0000","last_fullsized":"2026-03-08T23:03:37.603277+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:10.242477+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:10.2424
77+0000","last_clean_scrub_stamp":"2026-03-08T23:03:10.242477+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T10:18:33.405135+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,3,2],"acting":[0,3,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.3","version":"62'19","reported_seq":65,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.599055+0000","last_change":"2026-03-08T23:03:07.158372+0000","last_active":"2026-03-08T23:03:37.599055+0000","last_peered":"2026-03-08T23:03:37.599055+0000","last_clean":"2026-03-08T23:03:37.599055+0000","last_became_active":"2026-03-08T23:03:07.148537+0000","last_became_peered":"2026-03-08T23:03:07.148537+0000","las
t_unstale":"2026-03-08T23:03:37.599055+0000","last_undegraded":"2026-03-08T23:03:37.599055+0000","last_fullsized":"2026-03-08T23:03:37.599055+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:06.109853+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:06.109853+0000","last_clean_scrub_stamp":"2026-03-08T23:03:06.109853+0000","objects_scrubbed":0,"log_size":19,"log_dups_size":0,"ondisk_log_size":19,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T07:54:19.176556+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":330,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":39,"num_read_kb":25,"num_write":22,"num_write_kb":3,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,0,6],"acting":[4,0,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"2
.2","version":"0'0","reported_seq":33,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.706602+0000","last_change":"2026-03-08T23:03:05.258324+0000","last_active":"2026-03-08T23:03:37.706602+0000","last_peered":"2026-03-08T23:03:37.706602+0000","last_clean":"2026-03-08T23:03:37.706602+0000","last_became_active":"2026-03-08T23:03:05.258194+0000","last_became_peered":"2026-03-08T23:03:05.258194+0000","last_unstale":"2026-03-08T23:03:37.706602+0000","last_undegraded":"2026-03-08T23:03:37.706602+0000","last_fullsized":"2026-03-08T23:03:37.706602+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:03.949608+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:03.949608+0000","last_clean_scrub_stamp":"2026-03-08T23:03:03.949608+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T01:43:59.696656+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,6],"acting":[5,1,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"5.5","version":"0'0","reported_seq":25,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.603134+0000","last_change":"2026-03-08T23:03:09.257843+0000","last_active":"2026-03-08T23:03:37.603134+0000","last_peered":"2026-03-08T23:03:37.603134+0000","last_clean":"2026-03-08T23:03:37.603134+0000","last_became_active":"2026-03-08T23:03:09.257057+0000","last_became_peered":"2026-03-08T23:03:09.257057+0000","last_unstale":"2026-03-08T23:03:37.603134+0000","last_undegraded":"2026-03-08T23:03:37.603134+0000","last_fullsized":"2026-03-08T23:03:37.603134+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:08.120926+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:08.1209
26+0000","last_clean_scrub_stamp":"2026-03-08T23:03:08.120926+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T01:26:30.827034+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,1,4],"acting":[0,1,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"6.6","version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.705397+0000","last_change":"2026-03-08T23:03:11.322587+0000","last_active":"2026-03-08T23:03:37.705397+0000","last_peered":"2026-03-08T23:03:37.705397+0000","last_clean":"2026-03-08T23:03:37.705397+0000","last_became_active":"2026-03-08T23:03:11.321815+0000","last_became_peered":"2026-03-08T23:03:11.321815+0000","last_
unstale":"2026-03-08T23:03:37.705397+0000","last_undegraded":"2026-03-08T23:03:37.705397+0000","last_fullsized":"2026-03-08T23:03:37.705397+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:10.242477+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:10.242477+0000","last_clean_scrub_stamp":"2026-03-08T23:03:10.242477+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T04:06:24.094455+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,4,7],"acting":[3,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.0","versi
on":"62'18","reported_seq":61,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.703630+0000","last_change":"2026-03-08T23:03:07.143410+0000","last_active":"2026-03-08T23:03:37.703630+0000","last_peered":"2026-03-08T23:03:37.703630+0000","last_clean":"2026-03-08T23:03:37.703630+0000","last_became_active":"2026-03-08T23:03:07.143316+0000","last_became_peered":"2026-03-08T23:03:07.143316+0000","last_unstale":"2026-03-08T23:03:37.703630+0000","last_undegraded":"2026-03-08T23:03:37.703630+0000","last_fullsized":"2026-03-08T23:03:37.703630+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:06.109853+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:06.109853+0000","last_clean_scrub_stamp":"2026-03-08T23:03:06.109853+0000","objects_scrubbed":0,"log_size":18,"log_dups_size":0,"ondisk_log_size":18,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T08:53:40.400754+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":220,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":34,"num_read_kb":22,"num_write":20,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,2,6],"acting":[1,2,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"2.1","version":"0'0","reported_seq":33,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.604399+0000","last_change":"2026-03-08T23:03:05.271515+0000","last_active":"2026-03-08T23:03:37.604399+0000","last_peered":"2026-03-08T23:03:37.604399+0000","last_clean":"2026-03-08T23:03:37.604399+0000","last_became_active":"2026-03-08T23:03:05.258158+0000","last_became_peered":"2026-03-08T23:03:05.258158+0000","last_unstale":"2026-03-08T23:03:37.604399+0000","last_undegraded":"2026-03-08T23:03:37.604399+0000","last_fullsized":"2026-03-08T23:03:37.604399+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:03.949608+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:0
3.949608+0000","last_clean_scrub_stamp":"2026-03-08T23:03:03.949608+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T06:08:43.394496+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,3,0],"acting":[2,3,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"5.6","version":"0'0","reported_seq":25,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.604463+0000","last_change":"2026-03-08T23:03:09.269852+0000","last_active":"2026-03-08T23:03:37.604463+0000","last_peered":"2026-03-08T23:03:37.604463+0000","last_clean":"2026-03-08T23:03:37.604463+0000","last_became_active":"2026-03-08T23:03:09.269760+0000","last_became_peered":"2026-03-08T23:03:09.269760+0000",
"last_unstale":"2026-03-08T23:03:37.604463+0000","last_undegraded":"2026-03-08T23:03:37.604463+0000","last_fullsized":"2026-03-08T23:03:37.604463+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:08.120926+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:08.120926+0000","last_clean_scrub_stamp":"2026-03-08T23:03:08.120926+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T07:23:48.614808+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,5,7],"acting":[2,5,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.5",
"version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.704723+0000","last_change":"2026-03-08T23:03:11.572880+0000","last_active":"2026-03-08T23:03:37.704723+0000","last_peered":"2026-03-08T23:03:37.704723+0000","last_clean":"2026-03-08T23:03:37.704723+0000","last_became_active":"2026-03-08T23:03:11.572659+0000","last_became_peered":"2026-03-08T23:03:11.572659+0000","last_unstale":"2026-03-08T23:03:37.704723+0000","last_undegraded":"2026-03-08T23:03:37.704723+0000","last_fullsized":"2026-03-08T23:03:37.704723+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:10.242477+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:10.242477+0000","last_clean_scrub_stamp":"2026-03-08T23:03:10.242477+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T08:01:18.837585+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,6,3],"acting":[7,6,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.1","version":"62'14","reported_seq":50,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.605037+0000","last_change":"2026-03-08T23:03:07.150136+0000","last_active":"2026-03-08T23:03:37.605037+0000","last_peered":"2026-03-08T23:03:37.605037+0000","last_clean":"2026-03-08T23:03:37.605037+0000","last_became_active":"2026-03-08T23:03:07.149918+0000","last_became_peered":"2026-03-08T23:03:07.149918+0000","last_unstale":"2026-03-08T23:03:37.605037+0000","last_undegraded":"2026-03-08T23:03:37.605037+0000","last_fullsized":"2026-03-08T23:03:37.605037+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:06.109853+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:06.10
9853+0000","last_clean_scrub_stamp":"2026-03-08T23:03:06.109853+0000","objects_scrubbed":0,"log_size":14,"log_dups_size":0,"ondisk_log_size":14,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T00:03:31.384337+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":21,"num_read_kb":14,"num_write":14,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,4,3],"acting":[0,4,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"2.0","version":"0'0","reported_seq":33,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.704309+0000","last_change":"2026-03-08T23:03:05.352768+0000","last_active":"2026-03-08T23:03:37.704309+0000","last_peered":"2026-03-08T23:03:37.704309+0000","last_clean":"2026-03-08T23:03:37.704309+0000","last_became_active":"2026-03-08T23:03:05.352289+0000","last_became_peered":"2026-03-08T23:03:05.352289+0000
","last_unstale":"2026-03-08T23:03:37.704309+0000","last_undegraded":"2026-03-08T23:03:37.704309+0000","last_fullsized":"2026-03-08T23:03:37.704309+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:03.949608+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:03.949608+0000","last_clean_scrub_stamp":"2026-03-08T23:03:03.949608+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T05:13:37.446786+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,1,0],"acting":[7,1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.7
","version":"0'0","reported_seq":25,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.706772+0000","last_change":"2026-03-08T23:03:09.259498+0000","last_active":"2026-03-08T23:03:37.706772+0000","last_peered":"2026-03-08T23:03:37.706772+0000","last_clean":"2026-03-08T23:03:37.706772+0000","last_became_active":"2026-03-08T23:03:09.259373+0000","last_became_peered":"2026-03-08T23:03:09.259373+0000","last_unstale":"2026-03-08T23:03:37.706772+0000","last_undegraded":"2026-03-08T23:03:37.706772+0000","last_fullsized":"2026-03-08T23:03:37.706772+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:08.120926+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:08.120926+0000","last_clean_scrub_stamp":"2026-03-08T23:03:08.120926+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T03:14:51.399825+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,0],"acting":[5,1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.4","version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.704060+0000","last_change":"2026-03-08T23:03:11.322809+0000","last_active":"2026-03-08T23:03:37.704060+0000","last_peered":"2026-03-08T23:03:37.704060+0000","last_clean":"2026-03-08T23:03:37.704060+0000","last_became_active":"2026-03-08T23:03:11.322694+0000","last_became_peered":"2026-03-08T23:03:11.322694+0000","last_unstale":"2026-03-08T23:03:37.704060+0000","last_undegraded":"2026-03-08T23:03:37.704060+0000","last_fullsized":"2026-03-08T23:03:37.704060+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:10.242477+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:10.2424
77+0000","last_clean_scrub_stamp":"2026-03-08T23:03:10.242477+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T07:23:45.443972+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,3],"acting":[1,5,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"3.2","version":"62'10","reported_seq":44,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.705996+0000","last_change":"2026-03-08T23:03:07.144247+0000","last_active":"2026-03-08T23:03:37.705996+0000","last_peered":"2026-03-08T23:03:37.705996+0000","last_clean":"2026-03-08T23:03:37.705996+0000","last_became_active":"2026-03-08T23:03:07.144056+0000","last_became_peered":"2026-03-08T23:03:07.144056+0000","las
t_unstale":"2026-03-08T23:03:37.705996+0000","last_undegraded":"2026-03-08T23:03:37.705996+0000","last_fullsized":"2026-03-08T23:03:37.705996+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:06.109853+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:06.109853+0000","last_clean_scrub_stamp":"2026-03-08T23:03:06.109853+0000","objects_scrubbed":0,"log_size":10,"log_dups_size":0,"ondisk_log_size":10,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T02:13:49.239945+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":15,"num_read_kb":10,"num_write":10,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,5,6],"acting":[3,5,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"2.3
","version":"0'0","reported_seq":33,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.706226+0000","last_change":"2026-03-08T23:03:05.261439+0000","last_active":"2026-03-08T23:03:37.706226+0000","last_peered":"2026-03-08T23:03:37.706226+0000","last_clean":"2026-03-08T23:03:37.706226+0000","last_became_active":"2026-03-08T23:03:05.261305+0000","last_became_peered":"2026-03-08T23:03:05.261305+0000","last_unstale":"2026-03-08T23:03:37.706226+0000","last_undegraded":"2026-03-08T23:03:37.706226+0000","last_fullsized":"2026-03-08T23:03:37.706226+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:03.949608+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:03.949608+0000","last_clean_scrub_stamp":"2026-03-08T23:03:03.949608+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T09:04:41.881046+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,2,7],"acting":[5,2,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"1.0","version":"66'39","reported_seq":68,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:39.956568+0000","last_change":"2026-03-08T23:02:44.478154+0000","last_active":"2026-03-08T23:03:39.956568+0000","last_peered":"2026-03-08T23:03:39.956568+0000","last_clean":"2026-03-08T23:03:39.956568+0000","last_became_active":"2026-03-08T23:02:44.468283+0000","last_became_peered":"2026-03-08T23:02:44.468283+0000","last_unstale":"2026-03-08T23:03:39.956568+0000","last_undegraded":"2026-03-08T23:03:39.956568+0000","last_fullsized":"2026-03-08T23:03:39.956568+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":20,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T22:59:49.649662+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T22:59:49.64
9662+0000","last_clean_scrub_stamp":"2026-03-08T22:59:49.649662+0000","objects_scrubbed":0,"log_size":39,"log_dups_size":0,"ondisk_log_size":39,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T07:14:01.275386+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":106,"num_read_kb":213,"num_write":69,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":2,"num_bytes_recovered":459280,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,0,6],"acting":[7,0,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.4","version":"0'0","reported_seq":25,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.704463+0000","last_change":"2026-03-08T23:03:09.276138+0000","last_active":"2026-03-08T23:03:37.704463+0000","last_peered":"2026-03-08T23:03:37.704463+0000","last_clean":"2026-03-08T23:03:37.704463+0000","last_became_active":"2026-03-08T23:03:09.276065+0000","last_became_peered":"2026-03-08T23:03:0
9.276065+0000","last_unstale":"2026-03-08T23:03:37.704463+0000","last_undegraded":"2026-03-08T23:03:37.704463+0000","last_fullsized":"2026-03-08T23:03:37.704463+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:08.120926+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:08.120926+0000","last_clean_scrub_stamp":"2026-03-08T23:03:08.120926+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T00:45:59.167461+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,2,5],"acting":[7,2,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]}
,{"pgid":"6.7","version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.706137+0000","last_change":"2026-03-08T23:03:11.326007+0000","last_active":"2026-03-08T23:03:37.706137+0000","last_peered":"2026-03-08T23:03:37.706137+0000","last_clean":"2026-03-08T23:03:37.706137+0000","last_became_active":"2026-03-08T23:03:11.325511+0000","last_became_peered":"2026-03-08T23:03:11.325511+0000","last_unstale":"2026-03-08T23:03:37.706137+0000","last_undegraded":"2026-03-08T23:03:37.706137+0000","last_fullsized":"2026-03-08T23:03:37.706137+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:10.242477+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:10.242477+0000","last_clean_scrub_stamp":"2026-03-08T23:03:10.242477+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T03:34:48.464423+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3,4],"acting":[5,3,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"3.d","version":"62'17","reported_seq":57,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.703606+0000","last_change":"2026-03-08T23:03:07.144995+0000","last_active":"2026-03-08T23:03:37.703606+0000","last_peered":"2026-03-08T23:03:37.703606+0000","last_clean":"2026-03-08T23:03:37.703606+0000","last_became_active":"2026-03-08T23:03:07.143922+0000","last_became_peered":"2026-03-08T23:03:07.143922+0000","last_unstale":"2026-03-08T23:03:37.703606+0000","last_undegraded":"2026-03-08T23:03:37.703606+0000","last_fullsized":"2026-03-08T23:03:37.703606+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:06.109853+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:06.10
9853+0000","last_clean_scrub_stamp":"2026-03-08T23:03:06.109853+0000","objects_scrubbed":0,"log_size":17,"log_dups_size":0,"ondisk_log_size":17,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T04:29:17.453857+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":29,"num_read_kb":19,"num_write":18,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,5,6],"acting":[7,5,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"2.c","version":"0'0","reported_seq":33,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.604333+0000","last_change":"2026-03-08T23:03:05.261381+0000","last_active":"2026-03-08T23:03:37.604333+0000","last_peered":"2026-03-08T23:03:37.604333+0000","last_clean":"2026-03-08T23:03:37.604333+0000","last_became_active":"2026-03-08T23:03:05.261221+0000","last_became_peered":"2026-03-08T23:03:05.261221+00
00","last_unstale":"2026-03-08T23:03:37.604333+0000","last_undegraded":"2026-03-08T23:03:37.604333+0000","last_fullsized":"2026-03-08T23:03:37.604333+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:03.949608+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:03.949608+0000","last_clean_scrub_stamp":"2026-03-08T23:03:03.949608+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T02:21:09.467745+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,5,0],"acting":[2,5,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"5
.b","version":"0'0","reported_seq":25,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.604355+0000","last_change":"2026-03-08T23:03:09.260311+0000","last_active":"2026-03-08T23:03:37.604355+0000","last_peered":"2026-03-08T23:03:37.604355+0000","last_clean":"2026-03-08T23:03:37.604355+0000","last_became_active":"2026-03-08T23:03:09.260169+0000","last_became_peered":"2026-03-08T23:03:09.260169+0000","last_unstale":"2026-03-08T23:03:37.604355+0000","last_undegraded":"2026-03-08T23:03:37.604355+0000","last_fullsized":"2026-03-08T23:03:37.604355+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:08.120926+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:08.120926+0000","last_clean_scrub_stamp":"2026-03-08T23:03:08.120926+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T11:00:09.782806+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,0,5],"acting":[2,0,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.8","version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.703666+0000","last_change":"2026-03-08T23:03:11.330620+0000","last_active":"2026-03-08T23:03:37.703666+0000","last_peered":"2026-03-08T23:03:37.703666+0000","last_clean":"2026-03-08T23:03:37.703666+0000","last_became_active":"2026-03-08T23:03:11.328178+0000","last_became_peered":"2026-03-08T23:03:11.328178+0000","last_unstale":"2026-03-08T23:03:37.703666+0000","last_undegraded":"2026-03-08T23:03:37.703666+0000","last_fullsized":"2026-03-08T23:03:37.703666+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:10.242477+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:10.2424
77+0000","last_clean_scrub_stamp":"2026-03-08T23:03:10.242477+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T00:43:49.415980+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,2,3],"acting":[7,2,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.c","version":"62'10","reported_seq":44,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.706629+0000","last_change":"2026-03-08T23:03:07.147284+0000","last_active":"2026-03-08T23:03:37.706629+0000","last_peered":"2026-03-08T23:03:37.706629+0000","last_clean":"2026-03-08T23:03:37.706629+0000","last_became_active":"2026-03-08T23:03:07.147125+0000","last_became_peered":"2026-03-08T23:03:07.147125+0000","las
t_unstale":"2026-03-08T23:03:37.706629+0000","last_undegraded":"2026-03-08T23:03:37.706629+0000","last_fullsized":"2026-03-08T23:03:37.706629+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:06.109853+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:06.109853+0000","last_clean_scrub_stamp":"2026-03-08T23:03:06.109853+0000","objects_scrubbed":0,"log_size":10,"log_dups_size":0,"ondisk_log_size":10,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T07:08:08.030833+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":15,"num_read_kb":10,"num_write":10,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3,6],"acting":[5,3,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"2.d
","version":"0'0","reported_seq":33,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.704184+0000","last_change":"2026-03-08T23:03:05.253779+0000","last_active":"2026-03-08T23:03:37.704184+0000","last_peered":"2026-03-08T23:03:37.704184+0000","last_clean":"2026-03-08T23:03:37.704184+0000","last_became_active":"2026-03-08T23:03:05.253599+0000","last_became_peered":"2026-03-08T23:03:05.253599+0000","last_unstale":"2026-03-08T23:03:37.704184+0000","last_undegraded":"2026-03-08T23:03:37.704184+0000","last_fullsized":"2026-03-08T23:03:37.704184+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:03.949608+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:03.949608+0000","last_clean_scrub_stamp":"2026-03-08T23:03:03.949608+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T09:24:25.849096+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,4,3],"acting":[1,4,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"5.a","version":"0'0","reported_seq":25,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.604804+0000","last_change":"2026-03-08T23:03:09.263804+0000","last_active":"2026-03-08T23:03:37.604804+0000","last_peered":"2026-03-08T23:03:37.604804+0000","last_clean":"2026-03-08T23:03:37.604804+0000","last_became_active":"2026-03-08T23:03:09.263712+0000","last_became_peered":"2026-03-08T23:03:09.263712+0000","last_unstale":"2026-03-08T23:03:37.604804+0000","last_undegraded":"2026-03-08T23:03:37.604804+0000","last_fullsized":"2026-03-08T23:03:37.604804+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:08.120926+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:08.1209
26+0000","last_clean_scrub_stamp":"2026-03-08T23:03:08.120926+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T03:58:29.178345+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,4,3],"acting":[2,4,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.9","version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.602316+0000","last_change":"2026-03-08T23:03:11.306247+0000","last_active":"2026-03-08T23:03:37.602316+0000","last_peered":"2026-03-08T23:03:37.602316+0000","last_clean":"2026-03-08T23:03:37.602316+0000","last_became_active":"2026-03-08T23:03:11.306008+0000","last_became_peered":"2026-03-08T23:03:11.306008+0000","last_
unstale":"2026-03-08T23:03:37.602316+0000","last_undegraded":"2026-03-08T23:03:37.602316+0000","last_fullsized":"2026-03-08T23:03:37.602316+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:10.242477+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:10.242477+0000","last_clean_scrub_stamp":"2026-03-08T23:03:10.242477+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T02:37:02.509976+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,2],"acting":[0,7,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.f","versi
on":"63'15","reported_seq":54,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.704111+0000","last_change":"2026-03-08T23:03:07.143719+0000","last_active":"2026-03-08T23:03:37.704111+0000","last_peered":"2026-03-08T23:03:37.704111+0000","last_clean":"2026-03-08T23:03:37.704111+0000","last_became_active":"2026-03-08T23:03:07.142277+0000","last_became_peered":"2026-03-08T23:03:07.142277+0000","last_unstale":"2026-03-08T23:03:37.704111+0000","last_undegraded":"2026-03-08T23:03:37.704111+0000","last_fullsized":"2026-03-08T23:03:37.704111+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:06.109853+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:06.109853+0000","last_clean_scrub_stamp":"2026-03-08T23:03:06.109853+0000","objects_scrubbed":0,"log_size":15,"log_dups_size":0,"ondisk_log_size":15,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T04:11:36.105278+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":26,"num_read_kb":17,"num_write":16,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,0],"acting":[7,4,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"2.e","version":"0'0","reported_seq":33,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.604346+0000","last_change":"2026-03-08T23:03:05.257448+0000","last_active":"2026-03-08T23:03:37.604346+0000","last_peered":"2026-03-08T23:03:37.604346+0000","last_clean":"2026-03-08T23:03:37.604346+0000","last_became_active":"2026-03-08T23:03:05.257269+0000","last_became_peered":"2026-03-08T23:03:05.257269+0000","last_unstale":"2026-03-08T23:03:37.604346+0000","last_undegraded":"2026-03-08T23:03:37.604346+0000","last_fullsized":"2026-03-08T23:03:37.604346+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:03.949608+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:0
3.949608+0000","last_clean_scrub_stamp":"2026-03-08T23:03:03.949608+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T04:40:22.064448+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,3,7],"acting":[2,3,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"5.9","version":"63'11","reported_seq":51,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:04:15.395985+0000","last_change":"2026-03-08T23:03:09.276273+0000","last_active":"2026-03-08T23:04:15.395985+0000","last_peered":"2026-03-08T23:04:15.395985+0000","last_clean":"2026-03-08T23:04:15.395985+0000","last_became_active":"2026-03-08T23:03:09.276207+0000","last_became_peered":"2026-03-08T23:03:09.276207+0000
","last_unstale":"2026-03-08T23:04:15.395985+0000","last_undegraded":"2026-03-08T23:04:15.395985+0000","last_fullsized":"2026-03-08T23:04:15.395985+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:08.120926+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:08.120926+0000","last_clean_scrub_stamp":"2026-03-08T23:03:08.120926+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T00:37:08.341613+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,6,4],"acting":[7,6,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"6
.a","version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.705072+0000","last_change":"2026-03-08T23:03:11.569969+0000","last_active":"2026-03-08T23:03:37.705072+0000","last_peered":"2026-03-08T23:03:37.705072+0000","last_clean":"2026-03-08T23:03:37.705072+0000","last_became_active":"2026-03-08T23:03:11.569732+0000","last_became_peered":"2026-03-08T23:03:11.569732+0000","last_unstale":"2026-03-08T23:03:37.705072+0000","last_undegraded":"2026-03-08T23:03:37.705072+0000","last_fullsized":"2026-03-08T23:03:37.705072+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:10.242477+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:10.242477+0000","last_clean_scrub_stamp":"2026-03-08T23:03:10.242477+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T05:44:10.961590+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,6,0],"acting":[5,6,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"3.e","version":"62'11","reported_seq":48,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.704195+0000","last_change":"2026-03-08T23:03:07.145142+0000","last_active":"2026-03-08T23:03:37.704195+0000","last_peered":"2026-03-08T23:03:37.704195+0000","last_clean":"2026-03-08T23:03:37.704195+0000","last_became_active":"2026-03-08T23:03:07.141502+0000","last_became_peered":"2026-03-08T23:03:07.141502+0000","last_unstale":"2026-03-08T23:03:37.704195+0000","last_undegraded":"2026-03-08T23:03:37.704195+0000","last_fullsized":"2026-03-08T23:03:37.704195+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:06.109853+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:06.10
9853+0000","last_clean_scrub_stamp":"2026-03-08T23:03:06.109853+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T09:38:18.481763+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":20,"num_read_kb":13,"num_write":12,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,1],"acting":[7,4,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"2.f","version":"55'2","reported_seq":49,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.599217+0000","last_change":"2026-03-08T23:03:05.256739+0000","last_active":"2026-03-08T23:03:37.599217+0000","last_peered":"2026-03-08T23:03:37.599217+0000","last_clean":"2026-03-08T23:03:37.599217+0000","last_became_active":"2026-03-08T23:03:05.256600+0000","last_became_peered":"2026-03-08T23:03:05.256600+0
000","last_unstale":"2026-03-08T23:03:37.599217+0000","last_undegraded":"2026-03-08T23:03:37.599217+0000","last_fullsized":"2026-03-08T23:03:37.599217+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:03.949608+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:03.949608+0000","last_clean_scrub_stamp":"2026-03-08T23:03:03.949608+0000","objects_scrubbed":0,"log_size":2,"log_dups_size":0,"ondisk_log_size":2,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T08:36:37.504041+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":92,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":14,"num_read_kb":14,"num_write":4,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,0,7],"acting":[4,0,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid
":"5.8","version":"0'0","reported_seq":25,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.604750+0000","last_change":"2026-03-08T23:03:09.255626+0000","last_active":"2026-03-08T23:03:37.604750+0000","last_peered":"2026-03-08T23:03:37.604750+0000","last_clean":"2026-03-08T23:03:37.604750+0000","last_became_active":"2026-03-08T23:03:09.255474+0000","last_became_peered":"2026-03-08T23:03:09.255474+0000","last_unstale":"2026-03-08T23:03:37.604750+0000","last_undegraded":"2026-03-08T23:03:37.604750+0000","last_fullsized":"2026-03-08T23:03:37.604750+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:08.120926+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:08.120926+0000","last_clean_scrub_stamp":"2026-03-08T23:03:08.120926+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T10:39:05.509820+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,0,1],"acting":[2,0,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.b","version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.705379+0000","last_change":"2026-03-08T23:03:11.328263+0000","last_active":"2026-03-08T23:03:37.705379+0000","last_peered":"2026-03-08T23:03:37.705379+0000","last_clean":"2026-03-08T23:03:37.705379+0000","last_became_active":"2026-03-08T23:03:11.324286+0000","last_became_peered":"2026-03-08T23:03:11.324286+0000","last_unstale":"2026-03-08T23:03:37.705379+0000","last_undegraded":"2026-03-08T23:03:37.705379+0000","last_fullsized":"2026-03-08T23:03:37.705379+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:10.242477+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:10.2424
77+0000","last_clean_scrub_stamp":"2026-03-08T23:03:10.242477+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T04:13:09.153031+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,1],"acting":[3,7,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.11","version":"62'11","reported_seq":48,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.703500+0000","last_change":"2026-03-08T23:03:07.144009+0000","last_active":"2026-03-08T23:03:37.703500+0000","last_peered":"2026-03-08T23:03:37.703500+0000","last_clean":"2026-03-08T23:03:37.703500+0000","last_became_active":"2026-03-08T23:03:07.143612+0000","last_became_peered":"2026-03-08T23:03:07.143612+0000","la
st_unstale":"2026-03-08T23:03:37.703500+0000","last_undegraded":"2026-03-08T23:03:37.703500+0000","last_fullsized":"2026-03-08T23:03:37.703500+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:06.109853+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:06.109853+0000","last_clean_scrub_stamp":"2026-03-08T23:03:06.109853+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T00:25:47.730982+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":20,"num_read_kb":13,"num_write":12,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,6],"acting":[7,4,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"
2.10","version":"0'0","reported_seq":33,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.604253+0000","last_change":"2026-03-08T23:03:05.258314+0000","last_active":"2026-03-08T23:03:37.604253+0000","last_peered":"2026-03-08T23:03:37.604253+0000","last_clean":"2026-03-08T23:03:37.604253+0000","last_became_active":"2026-03-08T23:03:05.257953+0000","last_became_peered":"2026-03-08T23:03:05.257953+0000","last_unstale":"2026-03-08T23:03:37.604253+0000","last_undegraded":"2026-03-08T23:03:37.604253+0000","last_fullsized":"2026-03-08T23:03:37.604253+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:03.949608+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:03.949608+0000","last_clean_scrub_stamp":"2026-03-08T23:03:03.949608+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T03:38:01.823337+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,1,0],"acting":[2,1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"5.17","version":"0'0","reported_seq":25,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.706439+0000","last_change":"2026-03-08T23:03:09.274088+0000","last_active":"2026-03-08T23:03:37.706439+0000","last_peered":"2026-03-08T23:03:37.706439+0000","last_clean":"2026-03-08T23:03:37.706439+0000","last_became_active":"2026-03-08T23:03:09.273987+0000","last_became_peered":"2026-03-08T23:03:09.273987+0000","last_unstale":"2026-03-08T23:03:37.706439+0000","last_undegraded":"2026-03-08T23:03:37.706439+0000","last_fullsized":"2026-03-08T23:03:37.706439+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:08.120926+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:08.120
926+0000","last_clean_scrub_stamp":"2026-03-08T23:03:08.120926+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T08:22:11.788506+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,1,7],"acting":[3,1,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"6.14","version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.604276+0000","last_change":"2026-03-08T23:03:11.309936+0000","last_active":"2026-03-08T23:03:37.604276+0000","last_peered":"2026-03-08T23:03:37.604276+0000","last_clean":"2026-03-08T23:03:37.604276+0000","last_became_active":"2026-03-08T23:03:11.308216+0000","last_became_peered":"2026-03-08T23:03:11.308216+0000","las
t_unstale":"2026-03-08T23:03:37.604276+0000","last_undegraded":"2026-03-08T23:03:37.604276+0000","last_fullsized":"2026-03-08T23:03:37.604276+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:10.242477+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:10.242477+0000","last_clean_scrub_stamp":"2026-03-08T23:03:10.242477+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T02:38:50.167911+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,4,7],"acting":[2,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"3.10","ve
rsion":"62'4","reported_seq":35,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.703585+0000","last_change":"2026-03-08T23:03:07.151427+0000","last_active":"2026-03-08T23:03:37.703585+0000","last_peered":"2026-03-08T23:03:37.703585+0000","last_clean":"2026-03-08T23:03:37.703585+0000","last_became_active":"2026-03-08T23:03:07.151150+0000","last_became_peered":"2026-03-08T23:03:07.151150+0000","last_unstale":"2026-03-08T23:03:37.703585+0000","last_undegraded":"2026-03-08T23:03:37.703585+0000","last_fullsized":"2026-03-08T23:03:37.703585+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:06.109853+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:06.109853+0000","last_clean_scrub_stamp":"2026-03-08T23:03:06.109853+0000","objects_scrubbed":0,"log_size":4,"log_dups_size":0,"ondisk_log_size":4,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-09T23:24:31.401635+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":6,"num_read_kb":4,"num_write":4,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,0,5],"acting":[6,0,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"2.11","version":"0'0","reported_seq":33,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.703550+0000","last_change":"2026-03-08T23:03:05.258410+0000","last_active":"2026-03-08T23:03:37.703550+0000","last_peered":"2026-03-08T23:03:37.703550+0000","last_clean":"2026-03-08T23:03:37.703550+0000","last_became_active":"2026-03-08T23:03:05.258203+0000","last_became_peered":"2026-03-08T23:03:05.258203+0000","last_unstale":"2026-03-08T23:03:37.703550+0000","last_undegraded":"2026-03-08T23:03:37.703550+0000","last_fullsized":"2026-03-08T23:03:37.703550+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:03.949608+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:03.949
608+0000","last_clean_scrub_stamp":"2026-03-08T23:03:03.949608+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T04:34:43.426775+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,4,1],"acting":[6,4,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"5.16","version":"0'0","reported_seq":25,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.706916+0000","last_change":"2026-03-08T23:03:09.266168+0000","last_active":"2026-03-08T23:03:37.706916+0000","last_peered":"2026-03-08T23:03:37.706916+0000","last_clean":"2026-03-08T23:03:37.706916+0000","last_became_active":"2026-03-08T23:03:09.262680+0000","last_became_peered":"2026-03-08T23:03:09.262680+0000","las
t_unstale":"2026-03-08T23:03:37.706916+0000","last_undegraded":"2026-03-08T23:03:37.706916+0000","last_fullsized":"2026-03-08T23:03:37.706916+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:08.120926+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:08.120926+0000","last_clean_scrub_stamp":"2026-03-08T23:03:08.120926+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T06:12:56.958547+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3,1],"acting":[5,3,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.15","ve
rsion":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.704825+0000","last_change":"2026-03-08T23:03:11.573043+0000","last_active":"2026-03-08T23:03:37.704825+0000","last_peered":"2026-03-08T23:03:37.704825+0000","last_clean":"2026-03-08T23:03:37.704825+0000","last_became_active":"2026-03-08T23:03:11.572931+0000","last_became_peered":"2026-03-08T23:03:11.572931+0000","last_unstale":"2026-03-08T23:03:37.704825+0000","last_undegraded":"2026-03-08T23:03:37.704825+0000","last_fullsized":"2026-03-08T23:03:37.704825+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:10.242477+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:10.242477+0000","last_clean_scrub_stamp":"2026-03-08T23:03:10.242477+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T03:06:15.485642+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,6,4],"acting":[7,6,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.13","version":"62'11","reported_seq":48,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.703814+0000","last_change":"2026-03-08T23:03:07.143810+0000","last_active":"2026-03-08T23:03:37.703814+0000","last_peered":"2026-03-08T23:03:37.703814+0000","last_clean":"2026-03-08T23:03:37.703814+0000","last_became_active":"2026-03-08T23:03:07.142444+0000","last_became_peered":"2026-03-08T23:03:07.142444+0000","last_unstale":"2026-03-08T23:03:37.703814+0000","last_undegraded":"2026-03-08T23:03:37.703814+0000","last_fullsized":"2026-03-08T23:03:37.703814+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:06.109853+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:06.1
09853+0000","last_clean_scrub_stamp":"2026-03-08T23:03:06.109853+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T04:01:50.409742+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":20,"num_read_kb":13,"num_write":12,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,2],"acting":[7,4,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"2.12","version":"0'0","reported_seq":33,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.706424+0000","last_change":"2026-03-08T23:03:05.267361+0000","last_active":"2026-03-08T23:03:37.706424+0000","last_peered":"2026-03-08T23:03:37.706424+0000","last_clean":"2026-03-08T23:03:37.706424+0000","last_became_active":"2026-03-08T23:03:05.267277+0000","last_became_peered":"2026-03-08T23:03:05.267277+
0000","last_unstale":"2026-03-08T23:03:37.706424+0000","last_undegraded":"2026-03-08T23:03:37.706424+0000","last_fullsized":"2026-03-08T23:03:37.706424+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:03.949608+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:03.949608+0000","last_clean_scrub_stamp":"2026-03-08T23:03:03.949608+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T01:07:04.245505+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3,7],"acting":[5,3,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":
"5.15","version":"63'11","reported_seq":51,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:04:15.395984+0000","last_change":"2026-03-08T23:03:09.259441+0000","last_active":"2026-03-08T23:04:15.395984+0000","last_peered":"2026-03-08T23:04:15.395984+0000","last_clean":"2026-03-08T23:04:15.395984+0000","last_became_active":"2026-03-08T23:03:09.259276+0000","last_became_peered":"2026-03-08T23:03:09.259276+0000","last_unstale":"2026-03-08T23:04:15.395984+0000","last_undegraded":"2026-03-08T23:04:15.395984+0000","last_fullsized":"2026-03-08T23:04:15.395984+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:08.120926+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:08.120926+0000","last_clean_scrub_stamp":"2026-03-08T23:03:08.120926+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T08:07:44.244730+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,0],"acting":[5,1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.16","version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.602444+0000","last_change":"2026-03-08T23:03:11.320671+0000","last_active":"2026-03-08T23:03:37.602444+0000","last_peered":"2026-03-08T23:03:37.602444+0000","last_clean":"2026-03-08T23:03:37.602444+0000","last_became_active":"2026-03-08T23:03:11.320537+0000","last_became_peered":"2026-03-08T23:03:11.320537+0000","last_unstale":"2026-03-08T23:03:37.602444+0000","last_undegraded":"2026-03-08T23:03:37.602444+0000","last_fullsized":"2026-03-08T23:03:37.602444+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:10.242477+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:10.242
477+0000","last_clean_scrub_stamp":"2026-03-08T23:03:10.242477+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T02:11:41.849283+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,3],"acting":[0,7,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.12","version":"62'9","reported_seq":45,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.603023+0000","last_change":"2026-03-08T23:03:07.139366+0000","last_active":"2026-03-08T23:03:37.603023+0000","last_peered":"2026-03-08T23:03:37.603023+0000","last_clean":"2026-03-08T23:03:37.603023+0000","last_became_active":"2026-03-08T23:03:07.138442+0000","last_became_peered":"2026-03-08T23:03:07.138442+0000","la
st_unstale":"2026-03-08T23:03:37.603023+0000","last_undegraded":"2026-03-08T23:03:37.603023+0000","last_fullsized":"2026-03-08T23:03:37.603023+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:06.109853+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:06.109853+0000","last_clean_scrub_stamp":"2026-03-08T23:03:06.109853+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T06:25:17.244273+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,3],"acting":[0,7,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"2.
13","version":"55'1","reported_seq":41,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.602548+0000","last_change":"2026-03-08T23:03:05.265019+0000","last_active":"2026-03-08T23:03:37.602548+0000","last_peered":"2026-03-08T23:03:37.602548+0000","last_clean":"2026-03-08T23:03:37.602548+0000","last_became_active":"2026-03-08T23:03:05.256390+0000","last_became_peered":"2026-03-08T23:03:05.256390+0000","last_unstale":"2026-03-08T23:03:37.602548+0000","last_undegraded":"2026-03-08T23:03:37.602548+0000","last_fullsized":"2026-03-08T23:03:37.602548+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:03.949608+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:03.949608+0000","last_clean_scrub_stamp":"2026-03-08T23:03:03.949608+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T03:55:55.988563+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":436,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":10,"num_read_kb":10,"num_write":2,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,4,3],"acting":[0,4,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"5.14","version":"63'11","reported_seq":51,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:04:15.397206+0000","last_change":"2026-03-08T23:03:09.270946+0000","last_active":"2026-03-08T23:04:15.397206+0000","last_peered":"2026-03-08T23:04:15.397206+0000","last_clean":"2026-03-08T23:04:15.397206+0000","last_became_active":"2026-03-08T23:03:09.270872+0000","last_became_peered":"2026-03-08T23:03:09.270872+0000","last_unstale":"2026-03-08T23:04:15.397206+0000","last_undegraded":"2026-03-08T23:04:15.397206+0000","last_fullsized":"2026-03-08T23:04:15.397206+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:08.120926+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:
08.120926+0000","last_clean_scrub_stamp":"2026-03-08T23:03:08.120926+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T08:03:28.504294+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,2],"acting":[3,7,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"6.17","version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.598900+0000","last_change":"2026-03-08T23:03:11.309563+0000","last_active":"2026-03-08T23:03:37.598900+0000","last_peered":"2026-03-08T23:03:37.598900+0000","last_clean":"2026-03-08T23:03:37.598900+0000","last_became_active":"2026-03-08T23:03:11.309130+0000","last_became_peered":"2026-03-08T23:03:11.309130+00
00","last_unstale":"2026-03-08T23:03:37.598900+0000","last_undegraded":"2026-03-08T23:03:37.598900+0000","last_fullsized":"2026-03-08T23:03:37.598900+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:10.242477+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:10.242477+0000","last_clean_scrub_stamp":"2026-03-08T23:03:10.242477+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T02:16:53.939886+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,2,5],"acting":[4,2,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3
.15","version":"62'9","reported_seq":45,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.703431+0000","last_change":"2026-03-08T23:03:07.147429+0000","last_active":"2026-03-08T23:03:37.703431+0000","last_peered":"2026-03-08T23:03:37.703431+0000","last_clean":"2026-03-08T23:03:37.703431+0000","last_became_active":"2026-03-08T23:03:07.145588+0000","last_became_peered":"2026-03-08T23:03:07.145588+0000","last_unstale":"2026-03-08T23:03:37.703431+0000","last_undegraded":"2026-03-08T23:03:37.703431+0000","last_fullsized":"2026-03-08T23:03:37.703431+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:06.109853+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:06.109853+0000","last_clean_scrub_stamp":"2026-03-08T23:03:06.109853+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T05:45:10.275131+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,3,4],"acting":[7,3,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"2.14","version":"55'1","reported_seq":41,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.703485+0000","last_change":"2026-03-08T23:03:05.262476+0000","last_active":"2026-03-08T23:03:37.703485+0000","last_peered":"2026-03-08T23:03:37.703485+0000","last_clean":"2026-03-08T23:03:37.703485+0000","last_became_active":"2026-03-08T23:03:05.261974+0000","last_became_peered":"2026-03-08T23:03:05.261974+0000","last_unstale":"2026-03-08T23:03:37.703485+0000","last_undegraded":"2026-03-08T23:03:37.703485+0000","last_fullsized":"2026-03-08T23:03:37.703485+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:03.949608+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03
:03.949608+0000","last_clean_scrub_stamp":"2026-03-08T23:03:03.949608+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T04:54:13.829359+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":993,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":10,"num_read_kb":10,"num_write":2,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,3,5],"acting":[6,3,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"5.13","version":"0'0","reported_seq":25,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.705784+0000","last_change":"2026-03-08T23:03:09.266934+0000","last_active":"2026-03-08T23:03:37.705784+0000","last_peered":"2026-03-08T23:03:37.705784+0000","last_clean":"2026-03-08T23:03:37.705784+0000","last_became_active":"2026-03-08T23:03:09.266834+0000","last_became_peered":"2026-03-08T23:03:09.266834
+0000","last_unstale":"2026-03-08T23:03:37.705784+0000","last_undegraded":"2026-03-08T23:03:37.705784+0000","last_fullsized":"2026-03-08T23:03:37.705784+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:08.120926+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:08.120926+0000","last_clean_scrub_stamp":"2026-03-08T23:03:08.120926+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T06:24:53.488537+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,1],"acting":[3,0,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid"
:"6.10","version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.602355+0000","last_change":"2026-03-08T23:03:11.308951+0000","last_active":"2026-03-08T23:03:37.602355+0000","last_peered":"2026-03-08T23:03:37.602355+0000","last_clean":"2026-03-08T23:03:37.602355+0000","last_became_active":"2026-03-08T23:03:11.308812+0000","last_became_peered":"2026-03-08T23:03:11.308812+0000","last_unstale":"2026-03-08T23:03:37.602355+0000","last_undegraded":"2026-03-08T23:03:37.602355+0000","last_fullsized":"2026-03-08T23:03:37.602355+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:10.242477+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:10.242477+0000","last_clean_scrub_stamp":"2026-03-08T23:03:10.242477+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T08:19:48.617927+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,5,1],"acting":[0,5,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.14","version":"62'10","reported_seq":44,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.598450+0000","last_change":"2026-03-08T23:03:07.148637+0000","last_active":"2026-03-08T23:03:37.598450+0000","last_peered":"2026-03-08T23:03:37.598450+0000","last_clean":"2026-03-08T23:03:37.598450+0000","last_became_active":"2026-03-08T23:03:07.146995+0000","last_became_peered":"2026-03-08T23:03:07.146995+0000","last_unstale":"2026-03-08T23:03:37.598450+0000","last_undegraded":"2026-03-08T23:03:37.598450+0000","last_fullsized":"2026-03-08T23:03:37.598450+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:06.109853+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:06.1
09853+0000","last_clean_scrub_stamp":"2026-03-08T23:03:06.109853+0000","objects_scrubbed":0,"log_size":10,"log_dups_size":0,"ondisk_log_size":10,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T04:29:19.989973+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":15,"num_read_kb":10,"num_write":10,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,7,6],"acting":[4,7,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"2.15","version":"0'0","reported_seq":33,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.704029+0000","last_change":"2026-03-08T23:03:05.272759+0000","last_active":"2026-03-08T23:03:37.704029+0000","last_peered":"2026-03-08T23:03:37.704029+0000","last_clean":"2026-03-08T23:03:37.704029+0000","last_became_active":"2026-03-08T23:03:05.272579+0000","last_became_peered":"2026-03-08T23:03:05.272579+00
00","last_unstale":"2026-03-08T23:03:37.704029+0000","last_undegraded":"2026-03-08T23:03:37.704029+0000","last_fullsized":"2026-03-08T23:03:37.704029+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:03.949608+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:03.949608+0000","last_clean_scrub_stamp":"2026-03-08T23:03:03.949608+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T08:15:34.349919+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,0],"acting":[1,5,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"5
.12","version":"0'0","reported_seq":25,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.703999+0000","last_change":"2026-03-08T23:03:09.268588+0000","last_active":"2026-03-08T23:03:37.703999+0000","last_peered":"2026-03-08T23:03:37.703999+0000","last_clean":"2026-03-08T23:03:37.703999+0000","last_became_active":"2026-03-08T23:03:09.267944+0000","last_became_peered":"2026-03-08T23:03:09.267944+0000","last_unstale":"2026-03-08T23:03:37.703999+0000","last_undegraded":"2026-03-08T23:03:37.703999+0000","last_fullsized":"2026-03-08T23:03:37.703999+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:08.120926+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:08.120926+0000","last_clean_scrub_stamp":"2026-03-08T23:03:08.120926+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T07:10:07.451790+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,3],"acting":[1,5,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"6.11","version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.705450+0000","last_change":"2026-03-08T23:03:11.329191+0000","last_active":"2026-03-08T23:03:37.705450+0000","last_peered":"2026-03-08T23:03:37.705450+0000","last_clean":"2026-03-08T23:03:37.705450+0000","last_became_active":"2026-03-08T23:03:11.320409+0000","last_became_peered":"2026-03-08T23:03:11.320409+0000","last_unstale":"2026-03-08T23:03:37.705450+0000","last_undegraded":"2026-03-08T23:03:37.705450+0000","last_fullsized":"2026-03-08T23:03:37.705450+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:10.242477+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:10.242
477+0000","last_clean_scrub_stamp":"2026-03-08T23:03:10.242477+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T04:11:25.131608+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,5],"acting":[3,0,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.17","version":"62'6","reported_seq":38,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.605206+0000","last_change":"2026-03-08T23:03:07.143064+0000","last_active":"2026-03-08T23:03:37.605206+0000","last_peered":"2026-03-08T23:03:37.605206+0000","last_clean":"2026-03-08T23:03:37.605206+0000","last_became_active":"2026-03-08T23:03:07.142940+0000","last_became_peered":"2026-03-08T23:03:07.142940+0000","la
st_unstale":"2026-03-08T23:03:37.605206+0000","last_undegraded":"2026-03-08T23:03:37.605206+0000","last_fullsized":"2026-03-08T23:03:37.605206+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:06.109853+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:06.109853+0000","last_clean_scrub_stamp":"2026-03-08T23:03:06.109853+0000","objects_scrubbed":0,"log_size":6,"log_dups_size":0,"ondisk_log_size":6,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T11:00:44.970445+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":3,"num_object_clones":0,"num_object_copies":9,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":3,"num_whiteouts":0,"num_read":9,"num_read_kb":6,"num_write":6,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,5,3],"acting":[0,5,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"2.16","v
ersion":"0'0","reported_seq":33,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.706385+0000","last_change":"2026-03-08T23:03:05.260055+0000","last_active":"2026-03-08T23:03:37.706385+0000","last_peered":"2026-03-08T23:03:37.706385+0000","last_clean":"2026-03-08T23:03:37.706385+0000","last_became_active":"2026-03-08T23:03:05.259828+0000","last_became_peered":"2026-03-08T23:03:05.259828+0000","last_unstale":"2026-03-08T23:03:37.706385+0000","last_undegraded":"2026-03-08T23:03:37.706385+0000","last_fullsized":"2026-03-08T23:03:37.706385+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:03.949608+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:03.949608+0000","last_clean_scrub_stamp":"2026-03-08T23:03:03.949608+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T07:28:36.850293+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,6,2],"acting":[5,6,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"5.11","version":"0'0","reported_seq":25,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.703146+0000","last_change":"2026-03-08T23:03:09.269178+0000","last_active":"2026-03-08T23:03:37.703146+0000","last_peered":"2026-03-08T23:03:37.703146+0000","last_clean":"2026-03-08T23:03:37.703146+0000","last_became_active":"2026-03-08T23:03:09.269028+0000","last_became_peered":"2026-03-08T23:03:09.269028+0000","last_unstale":"2026-03-08T23:03:37.703146+0000","last_undegraded":"2026-03-08T23:03:37.703146+0000","last_fullsized":"2026-03-08T23:03:37.703146+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:08.120926+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:08.120
926+0000","last_clean_scrub_stamp":"2026-03-08T23:03:08.120926+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T03:47:42.502854+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,4,7],"acting":[6,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"6.12","version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.704767+0000","last_change":"2026-03-08T23:03:11.310893+0000","last_active":"2026-03-08T23:03:37.704767+0000","last_peered":"2026-03-08T23:03:37.704767+0000","last_clean":"2026-03-08T23:03:37.704767+0000","last_became_active":"2026-03-08T23:03:11.310635+0000","last_became_peered":"2026-03-08T23:03:11.310635+0000","las
t_unstale":"2026-03-08T23:03:37.704767+0000","last_undegraded":"2026-03-08T23:03:37.704767+0000","last_fullsized":"2026-03-08T23:03:37.704767+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:10.242477+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:10.242477+0000","last_clean_scrub_stamp":"2026-03-08T23:03:10.242477+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T07:45:44.342211+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,2,4],"acting":[7,2,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.16","ve
rsion":"62'9","reported_seq":45,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.706442+0000","last_change":"2026-03-08T23:03:07.150402+0000","last_active":"2026-03-08T23:03:37.706442+0000","last_peered":"2026-03-08T23:03:37.706442+0000","last_clean":"2026-03-08T23:03:37.706442+0000","last_became_active":"2026-03-08T23:03:07.150221+0000","last_became_peered":"2026-03-08T23:03:07.150221+0000","last_unstale":"2026-03-08T23:03:37.706442+0000","last_undegraded":"2026-03-08T23:03:37.706442+0000","last_fullsized":"2026-03-08T23:03:37.706442+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:06.109853+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:06.109853+0000","last_clean_scrub_stamp":"2026-03-08T23:03:06.109853+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T02:29:03.387632+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,7,1],"acting":[5,7,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"2.17","version":"0'0","reported_seq":33,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.703388+0000","last_change":"2026-03-08T23:03:05.262601+0000","last_active":"2026-03-08T23:03:37.703388+0000","last_peered":"2026-03-08T23:03:37.703388+0000","last_clean":"2026-03-08T23:03:37.703388+0000","last_became_active":"2026-03-08T23:03:05.262123+0000","last_became_peered":"2026-03-08T23:03:05.262123+0000","last_unstale":"2026-03-08T23:03:37.703388+0000","last_undegraded":"2026-03-08T23:03:37.703388+0000","last_fullsized":"2026-03-08T23:03:37.703388+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:03.949608+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:
03.949608+0000","last_clean_scrub_stamp":"2026-03-08T23:03:03.949608+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-09T23:24:01.937092+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,5,2],"acting":[6,5,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"5.10","version":"0'0","reported_seq":25,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.704679+0000","last_change":"2026-03-08T23:03:09.270049+0000","last_active":"2026-03-08T23:03:37.704679+0000","last_peered":"2026-03-08T23:03:37.704679+0000","last_clean":"2026-03-08T23:03:37.704679+0000","last_became_active":"2026-03-08T23:03:09.269942+0000","last_became_peered":"2026-03-08T23:03:09.269942+0000
","last_unstale":"2026-03-08T23:03:37.704679+0000","last_undegraded":"2026-03-08T23:03:37.704679+0000","last_fullsized":"2026-03-08T23:03:37.704679+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:08.120926+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:08.120926+0000","last_clean_scrub_stamp":"2026-03-08T23:03:08.120926+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T02:50:18.154003+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,6],"acting":[7,4,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"6.1
3","version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.705328+0000","last_change":"2026-03-08T23:03:11.571867+0000","last_active":"2026-03-08T23:03:37.705328+0000","last_peered":"2026-03-08T23:03:37.705328+0000","last_clean":"2026-03-08T23:03:37.705328+0000","last_became_active":"2026-03-08T23:03:11.571109+0000","last_became_peered":"2026-03-08T23:03:11.571109+0000","last_unstale":"2026-03-08T23:03:37.705328+0000","last_undegraded":"2026-03-08T23:03:37.705328+0000","last_fullsized":"2026-03-08T23:03:37.705328+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:10.242477+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:10.242477+0000","last_clean_scrub_stamp":"2026-03-08T23:03:10.242477+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T08:08:31.446222+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,6],"acting":[3,0,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"6.1c","version":"62'1","reported_seq":23,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.703935+0000","last_change":"2026-03-08T23:03:11.310830+0000","last_active":"2026-03-08T23:03:37.703935+0000","last_peered":"2026-03-08T23:03:37.703935+0000","last_clean":"2026-03-08T23:03:37.703935+0000","last_became_active":"2026-03-08T23:03:11.310516+0000","last_became_peered":"2026-03-08T23:03:11.310516+0000","last_unstale":"2026-03-08T23:03:37.703935+0000","last_undegraded":"2026-03-08T23:03:37.703935+0000","last_fullsized":"2026-03-08T23:03:37.703935+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:10.242477+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:10.24
2477+0000","last_clean_scrub_stamp":"2026-03-08T23:03:10.242477+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T07:43:55.830970+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":403,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":3,"num_read_kb":3,"num_write":2,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,5,2],"acting":[7,5,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.19","version":"62'15","reported_seq":54,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.704357+0000","last_change":"2026-03-08T23:03:07.146617+0000","last_active":"2026-03-08T23:03:37.704357+0000","last_peered":"2026-03-08T23:03:37.704357+0000","last_clean":"2026-03-08T23:03:37.704357+0000","last_became_active":"2026-03-08T23:03:07.146465+0000","last_became_peered":"2026-03-08T23:03:07.146465+0000"
,"last_unstale":"2026-03-08T23:03:37.704357+0000","last_undegraded":"2026-03-08T23:03:37.704357+0000","last_fullsized":"2026-03-08T23:03:37.704357+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:06.109853+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:06.109853+0000","last_clean_scrub_stamp":"2026-03-08T23:03:06.109853+0000","objects_scrubbed":0,"log_size":15,"log_dups_size":0,"ondisk_log_size":15,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T07:36:47.927560+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":26,"num_read_kb":17,"num_write":16,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,3,4],"acting":[1,3,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgi
d":"2.18","version":"0'0","reported_seq":33,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.706574+0000","last_change":"2026-03-08T23:03:05.266532+0000","last_active":"2026-03-08T23:03:37.706574+0000","last_peered":"2026-03-08T23:03:37.706574+0000","last_clean":"2026-03-08T23:03:37.706574+0000","last_became_active":"2026-03-08T23:03:05.266451+0000","last_became_peered":"2026-03-08T23:03:05.266451+0000","last_unstale":"2026-03-08T23:03:37.706574+0000","last_undegraded":"2026-03-08T23:03:37.706574+0000","last_fullsized":"2026-03-08T23:03:37.706574+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:03.949608+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:03.949608+0000","last_clean_scrub_stamp":"2026-03-08T23:03:03.949608+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T04:11:04.688704+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,4,7],"acting":[5,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"5.1f","version":"63'11","reported_seq":54,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:04:15.396413+0000","last_change":"2026-03-08T23:03:09.268911+0000","last_active":"2026-03-08T23:04:15.396413+0000","last_peered":"2026-03-08T23:04:15.396413+0000","last_clean":"2026-03-08T23:04:15.396413+0000","last_became_active":"2026-03-08T23:03:09.268820+0000","last_became_peered":"2026-03-08T23:03:09.268820+0000","last_unstale":"2026-03-08T23:04:15.396413+0000","last_undegraded":"2026-03-08T23:04:15.396413+0000","last_fullsized":"2026-03-08T23:04:15.396413+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:08.120926+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:08.1
20926+0000","last_clean_scrub_stamp":"2026-03-08T23:03:08.120926+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T08:13:46.772858+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,4,7],"acting":[6,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"6.1d","version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.704563+0000","last_change":"2026-03-08T23:03:11.309735+0000","last_active":"2026-03-08T23:03:37.704563+0000","last_peered":"2026-03-08T23:03:37.704563+0000","last_clean":"2026-03-08T23:03:37.704563+0000","last_became_active":"2026-03-08T23:03:11.309608+0000","last_became_peered":"2026-03-08T23:03:11.309608+0000",
"last_unstale":"2026-03-08T23:03:37.704563+0000","last_undegraded":"2026-03-08T23:03:37.704563+0000","last_fullsized":"2026-03-08T23:03:37.704563+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:10.242477+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:10.242477+0000","last_clean_scrub_stamp":"2026-03-08T23:03:10.242477+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T08:13:35.136750+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,4],"acting":[1,5,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"3.18"
,"version":"62'9","reported_seq":45,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.706064+0000","last_change":"2026-03-08T23:03:07.142109+0000","last_active":"2026-03-08T23:03:37.706064+0000","last_peered":"2026-03-08T23:03:37.706064+0000","last_clean":"2026-03-08T23:03:37.706064+0000","last_became_active":"2026-03-08T23:03:07.141998+0000","last_became_peered":"2026-03-08T23:03:07.141998+0000","last_unstale":"2026-03-08T23:03:37.706064+0000","last_undegraded":"2026-03-08T23:03:37.706064+0000","last_fullsized":"2026-03-08T23:03:37.706064+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:06.109853+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:06.109853+0000","last_clean_scrub_stamp":"2026-03-08T23:03:06.109853+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T05:33:03.676215+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,1],"acting":[3,0,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"2.19","version":"55'1","reported_seq":34,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.705997+0000","last_change":"2026-03-08T23:03:05.302921+0000","last_active":"2026-03-08T23:03:37.705997+0000","last_peered":"2026-03-08T23:03:37.705997+0000","last_clean":"2026-03-08T23:03:37.705997+0000","last_became_active":"2026-03-08T23:03:05.302815+0000","last_became_peered":"2026-03-08T23:03:05.302815+0000","last_unstale":"2026-03-08T23:03:37.705997+0000","last_undegraded":"2026-03-08T23:03:37.705997+0000","last_fullsized":"2026-03-08T23:03:37.705997+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:03.949608+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03
:03.949608+0000","last_clean_scrub_stamp":"2026-03-08T23:03:03.949608+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T04:34:48.105325+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":46,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":1,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,6,0],"acting":[3,6,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"5.1e","version":"0'0","reported_seq":25,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.603431+0000","last_change":"2026-03-08T23:03:09.275586+0000","last_active":"2026-03-08T23:03:37.603431+0000","last_peered":"2026-03-08T23:03:37.603431+0000","last_clean":"2026-03-08T23:03:37.603431+0000","last_became_active":"2026-03-08T23:03:09.274433+0000","last_became_peered":"2026-03-08T23:03:09.274433+00
00","last_unstale":"2026-03-08T23:03:37.603431+0000","last_undegraded":"2026-03-08T23:03:37.603431+0000","last_fullsized":"2026-03-08T23:03:37.603431+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:08.120926+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:08.120926+0000","last_clean_scrub_stamp":"2026-03-08T23:03:08.120926+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T00:29:52.888855+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,2],"acting":[0,7,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"6
.1e","version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.598555+0000","last_change":"2026-03-08T23:03:11.568225+0000","last_active":"2026-03-08T23:03:37.598555+0000","last_peered":"2026-03-08T23:03:37.598555+0000","last_clean":"2026-03-08T23:03:37.598555+0000","last_became_active":"2026-03-08T23:03:11.567909+0000","last_became_peered":"2026-03-08T23:03:11.567909+0000","last_unstale":"2026-03-08T23:03:37.598555+0000","last_undegraded":"2026-03-08T23:03:37.598555+0000","last_fullsized":"2026-03-08T23:03:37.598555+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:10.242477+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:10.242477+0000","last_clean_scrub_stamp":"2026-03-08T23:03:10.242477+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T04:48:59.439216+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,6,5],"acting":[4,6,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"2.1a","version":"0'0","reported_seq":33,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.703318+0000","last_change":"2026-03-08T23:03:05.267034+0000","last_active":"2026-03-08T23:03:37.703318+0000","last_peered":"2026-03-08T23:03:37.703318+0000","last_clean":"2026-03-08T23:03:37.703318+0000","last_became_active":"2026-03-08T23:03:05.266677+0000","last_became_peered":"2026-03-08T23:03:05.266677+0000","last_unstale":"2026-03-08T23:03:37.703318+0000","last_undegraded":"2026-03-08T23:03:37.703318+0000","last_fullsized":"2026-03-08T23:03:37.703318+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:03.949608+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:03.949
608+0000","last_clean_scrub_stamp":"2026-03-08T23:03:03.949608+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T07:26:45.992415+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,4,7],"acting":[6,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"3.1b","version":"62'5","reported_seq":39,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.605260+0000","last_change":"2026-03-08T23:03:07.149451+0000","last_active":"2026-03-08T23:03:37.605260+0000","last_peered":"2026-03-08T23:03:37.605260+0000","last_clean":"2026-03-08T23:03:37.605260+0000","last_became_active":"2026-03-08T23:03:07.149292+0000","last_became_peered":"2026-03-08T23:03:07.149292+0000","la
st_unstale":"2026-03-08T23:03:37.605260+0000","last_undegraded":"2026-03-08T23:03:37.605260+0000","last_fullsized":"2026-03-08T23:03:37.605260+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:06.109853+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:06.109853+0000","last_clean_scrub_stamp":"2026-03-08T23:03:06.109853+0000","objects_scrubbed":0,"log_size":5,"log_dups_size":0,"ondisk_log_size":5,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-09T23:51:59.467893+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":11,"num_read_kb":7,"num_write":6,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,4,7],"acting":[0,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"5.1d"
,"version":"0'0","reported_seq":25,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.703447+0000","last_change":"2026-03-08T23:03:09.259503+0000","last_active":"2026-03-08T23:03:37.703447+0000","last_peered":"2026-03-08T23:03:37.703447+0000","last_clean":"2026-03-08T23:03:37.703447+0000","last_became_active":"2026-03-08T23:03:09.259370+0000","last_became_peered":"2026-03-08T23:03:09.259370+0000","last_unstale":"2026-03-08T23:03:37.703447+0000","last_undegraded":"2026-03-08T23:03:37.703447+0000","last_fullsized":"2026-03-08T23:03:37.703447+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:08.120926+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:08.120926+0000","last_clean_scrub_stamp":"2026-03-08T23:03:08.120926+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T01:11:01.394256+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,4,0],"acting":[1,4,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"6.1f","version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.706072+0000","last_change":"2026-03-08T23:03:11.572471+0000","last_active":"2026-03-08T23:03:37.706072+0000","last_peered":"2026-03-08T23:03:37.706072+0000","last_clean":"2026-03-08T23:03:37.706072+0000","last_became_active":"2026-03-08T23:03:11.572080+0000","last_became_peered":"2026-03-08T23:03:11.572080+0000","last_unstale":"2026-03-08T23:03:37.706072+0000","last_undegraded":"2026-03-08T23:03:37.706072+0000","last_fullsized":"2026-03-08T23:03:37.706072+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:10.242477+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:10.242
477+0000","last_clean_scrub_stamp":"2026-03-08T23:03:10.242477+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T00:49:11.195104+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,6,5],"acting":[3,6,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"2.1b","version":"0'0","reported_seq":33,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.706124+0000","last_change":"2026-03-08T23:03:05.259076+0000","last_active":"2026-03-08T23:03:37.706124+0000","last_peered":"2026-03-08T23:03:37.706124+0000","last_clean":"2026-03-08T23:03:37.706124+0000","last_became_active":"2026-03-08T23:03:05.258954+0000","last_became_peered":"2026-03-08T23:03:05.258954+0000","las
t_unstale":"2026-03-08T23:03:37.706124+0000","last_undegraded":"2026-03-08T23:03:37.706124+0000","last_fullsized":"2026-03-08T23:03:37.706124+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:03.949608+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:03.949608+0000","last_clean_scrub_stamp":"2026-03-08T23:03:03.949608+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T07:39:00.801184+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,6],"acting":[3,7,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.1a","ve
rsion":"62'9","reported_seq":45,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.599249+0000","last_change":"2026-03-08T23:03:07.147186+0000","last_active":"2026-03-08T23:03:37.599249+0000","last_peered":"2026-03-08T23:03:37.599249+0000","last_clean":"2026-03-08T23:03:37.599249+0000","last_became_active":"2026-03-08T23:03:07.146718+0000","last_became_peered":"2026-03-08T23:03:07.146718+0000","last_unstale":"2026-03-08T23:03:37.599249+0000","last_undegraded":"2026-03-08T23:03:37.599249+0000","last_fullsized":"2026-03-08T23:03:37.599249+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:06.109853+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:06.109853+0000","last_clean_scrub_stamp":"2026-03-08T23:03:06.109853+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T03:46:42.426757+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,1,2],"acting":[4,1,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"5.1c","version":"0'0","reported_seq":25,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.599238+0000","last_change":"2026-03-08T23:03:09.262136+0000","last_active":"2026-03-08T23:03:37.599238+0000","last_peered":"2026-03-08T23:03:37.599238+0000","last_clean":"2026-03-08T23:03:37.599238+0000","last_became_active":"2026-03-08T23:03:09.262055+0000","last_became_peered":"2026-03-08T23:03:09.262055+0000","last_unstale":"2026-03-08T23:03:37.599238+0000","last_undegraded":"2026-03-08T23:03:37.599238+0000","last_fullsized":"2026-03-08T23:03:37.599238+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:08.120926+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:
08.120926+0000","last_clean_scrub_stamp":"2026-03-08T23:03:08.120926+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T02:33:25.062434+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,3,2],"acting":[4,3,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]}],"pool_stats":[{"poolid":6,"num_pg":32,"stat_sum":{"num_bytes":416,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":3,"num_read_kb":3,"num_write":3,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"
num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":24576,"data_stored":1248,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":2,"ondisk_log_size":2,"up":96,"acting":96,"num_store_stats":8},{"poolid":5,"num_pg":32,"stat_sum":{"num_bytes":0,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":88,"ondisk_log_size":88,"up":96,"acting":96,"num_store_stats":8},{"poolid":4,"
num_pg":3,"stat_sum":{"num_bytes":408,"num_objects":3,"num_object_clones":0,"num_object_copies":9,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":3,"num_whiteouts":0,"num_read":68,"num_read_kb":63,"num_write":6,"num_write_kb":3,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":24576,"data_stored":1224,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":8,"ondisk_log_size":8,"up":9,"acting":9,"num_store_stats":7},{"poolid":3,"num_pg":32,"stat_sum":{"num_bytes":3702,"num_objects":178,"num_object_clones":0,"num_object_copies":534,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":178,"num_whiteouts":0,"num_read":701,"num_read_kb":458,"num_write":417,"num_write_kb":34,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapse
ts":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":417792,"data_stored":11106,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":395,"ondisk_log_size":395,"up":96,"acting":96,"num_store_stats":8},{"poolid":2,"num_pg":32,"stat_sum":{"num_bytes":1613,"num_objects":6,"num_object_clones":0,"num_object_copies":18,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":6,"num_whiteouts":0,"num_read":34,"num_read_kb":34,"num_write":10,"num_write_kb":6,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":73728,"data_stored":4839,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":6,"ondisk_log_size":6,"up":96,"acting":96,"num_store_stats":8},{"poolid":1,"num_pg":1,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":106,"num_read_kb":213,"num_write":69,"num_write_kb":584,"num_scrub
_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":2,"num_bytes_recovered":459280,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":1388544,"data_stored":1377840,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":39,"ondisk_log_size":39,"up":3,"acting":3,"num_store_stats":3}],"osd_stats":[{"osd":7,"up_from":51,"seq":219043332118,"num_pgs":60,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27952,"kb_used_data":1120,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939472,"statfs":{"total":21470642176,"available":21442019328,"internally_reserved":0,"allocated":1146880,"data_stored":712953,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1584,"internal_metadata":27458000},"hb_peers":[0,1,2,3,4,5,6],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":6,"up_from":44,"seq":188978561053,"num_pgs":42,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27928,"kb_used_data":1096,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939496,"statfs":{"total":21470642176,"available":21442043904,"internally_reserved":0,"allocated":1122304,"data_stored":712604,"data_compressed":0,"data_compressed_allocat
ed":0,"data_compressed_original":0,"omap_allocated":1589,"internal_metadata":27457995},"hb_peers":[0,1,2,3,4,5,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":5,"up_from":38,"seq":163208757284,"num_pgs":53,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27484,"kb_used_data":644,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939940,"statfs":{"total":21470642176,"available":21442498560,"internally_reserved":0,"allocated":659456,"data_stored":253713,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1,2,3,4,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":4,"up_from":32,"seq":137438953516,"num_pgs":56,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27516,"kb_used_data":680,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939908,"statfs":{"total":21470642176,"available":21442465792,"internally_reserved":0,"allocated":696320,"data_stored":253699,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1,2,3,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":3,"up_from":26,"seq":111669149746,"num_pgs":50,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27484,"kb_used_data":648,"kb_used_omap":
1,"kb_used_meta":26814,"kb_avail":20939940,"statfs":{"total":21470642176,"available":21442498560,"internally_reserved":0,"allocated":663552,"data_stored":254147,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1587,"internal_metadata":27457997},"hb_peers":[0,1,2,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":2,"up_from":18,"seq":77309411385,"num_pgs":38,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27480,"kb_used_data":640,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939944,"statfs":{"total":21470642176,"available":21442502656,"internally_reserved":0,"allocated":655360,"data_stored":252811,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":1,"up_from":13,"seq":55834574912,"num_pgs":47,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27492,"kb_used_data":652,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939932,"statfs":{"total":21470642176,"available":21442490368,"internally_reserved":0,"allocated":667648,"data_stored":252649,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,2,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_laten
cy_ns":0},"alerts":[]},{"osd":0,"up_from":8,"seq":34359738438,"num_pgs":50,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27944,"kb_used_data":1108,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939480,"statfs":{"total":21470642176,"available":21442027520,"internally_reserved":0,"allocated":1134592,"data_stored":712689,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[1,2,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]}],"pool_statfs":[{"poolid":1,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":16384,"data_stored":574,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":46,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_sto
red":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":12288,"data_stored":1475,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":16384,"data_stored":574,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":993,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":12288,"data_stored":1085,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":8192,"data_stored":92,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":49152,"data_stored":1320,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":57344,"data_stored":1458,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":49152,"data_stored":1282,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":3,"total":0,"available":0,"internally_reserved":0,"a
llocated":40960,"data_stored":1144,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":73728,"data_stored":1980,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":45056,"data_stored":1172,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":40960,"data_stored":1100,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":61440,"data_stored":1650,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":389,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":19,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":389,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":19,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":5,"total":0,"available":0,"i
nternally_reserved":0,"allocated":4096,"data_stored":19,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":389,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":6,"total":0,"available":0,"internally_reserved":
0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":403,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":13,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":403,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":13,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocate
d":8192,"data_stored":416,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0}]}} 2026-03-08T23:04:28.873 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid e2eb96e6-1b41-11f1-83e5-75f1b5373d30 -- ceph pg dump --format=json 2026-03-08T23:04:29.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:29 vm11 bash[23232]: audit 2026-03-08T23:04:28.805461+0000 mgr.y (mgr.24419) 64 : audit [DBG] from='client.14655 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-08T23:04:29.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:29 vm11 bash[23232]: audit 2026-03-08T23:04:28.805461+0000 mgr.y (mgr.24419) 64 : audit [DBG] from='client.14655 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-08T23:04:29.584 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:29 vm06 bash[20625]: audit 2026-03-08T23:04:28.805461+0000 mgr.y (mgr.24419) 64 : audit [DBG] from='client.14655 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-08T23:04:29.584 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:29 vm06 bash[20625]: audit 2026-03-08T23:04:28.805461+0000 mgr.y (mgr.24419) 64 : audit [DBG] from='client.14655 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-08T23:04:29.584 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:29 vm06 bash[27746]: audit 2026-03-08T23:04:28.805461+0000 mgr.y (mgr.24419) 64 : audit [DBG] from='client.14655 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-08T23:04:29.584 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:29 vm06 bash[27746]: audit 
2026-03-08T23:04:28.805461+0000 mgr.y (mgr.24419) 64 : audit [DBG] from='client.14655 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-08T23:04:30.762 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:30 vm06 bash[20625]: cluster 2026-03-08T23:04:29.633659+0000 mgr.y (mgr.24419) 65 : cluster [DBG] pgmap v28: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 682 B/s rd, 0 op/s 2026-03-08T23:04:30.762 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:30 vm06 bash[20625]: cluster 2026-03-08T23:04:29.633659+0000 mgr.y (mgr.24419) 65 : cluster [DBG] pgmap v28: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 682 B/s rd, 0 op/s 2026-03-08T23:04:30.762 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:04:30 vm06 bash[20883]: ::ffff:192.168.123.111 - - [08/Mar/2026:23:04:30] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-08T23:04:30.762 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:30 vm06 bash[27746]: cluster 2026-03-08T23:04:29.633659+0000 mgr.y (mgr.24419) 65 : cluster [DBG] pgmap v28: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 682 B/s rd, 0 op/s 2026-03-08T23:04:30.762 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:30 vm06 bash[27746]: cluster 2026-03-08T23:04:29.633659+0000 mgr.y (mgr.24419) 65 : cluster [DBG] pgmap v28: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 682 B/s rd, 0 op/s 2026-03-08T23:04:30.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:30 vm11 bash[23232]: cluster 2026-03-08T23:04:29.633659+0000 mgr.y (mgr.24419) 65 : cluster [DBG] pgmap v28: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 682 B/s rd, 0 op/s 2026-03-08T23:04:30.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:30 vm11 bash[23232]: cluster 2026-03-08T23:04:29.633659+0000 mgr.y (mgr.24419) 65 : cluster [DBG] pgmap 
v28: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 682 B/s rd, 0 op/s 2026-03-08T23:04:32.557 INFO:journalctl@ceph.iscsi.iscsi.a.vm11.stdout:Mar 08 23:04:32 vm11 bash[48986]: debug there is no tcmu-runner data available 2026-03-08T23:04:32.599 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/mon.c/config 2026-03-08T23:04:33.216 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-08T23:04:33.218 INFO:teuthology.orchestra.run.vm06.stderr:dumped all 2026-03-08T23:04:33.228 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:32 vm06 bash[20625]: cluster 2026-03-08T23:04:31.634153+0000 mgr.y (mgr.24419) 66 : cluster [DBG] pgmap v29: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-08T23:04:33.228 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:32 vm06 bash[20625]: cluster 2026-03-08T23:04:31.634153+0000 mgr.y (mgr.24419) 66 : cluster [DBG] pgmap v29: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-08T23:04:33.228 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:32 vm06 bash[20625]: audit 2026-03-08T23:04:32.074757+0000 mgr.y (mgr.24419) 67 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:04:33.228 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:32 vm06 bash[20625]: audit 2026-03-08T23:04:32.074757+0000 mgr.y (mgr.24419) 67 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:04:33.229 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:32 vm06 bash[27746]: cluster 2026-03-08T23:04:31.634153+0000 mgr.y (mgr.24419) 66 : cluster [DBG] pgmap v29: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-08T23:04:33.229 
INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:32 vm06 bash[27746]: cluster 2026-03-08T23:04:31.634153+0000 mgr.y (mgr.24419) 66 : cluster [DBG] pgmap v29: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-08T23:04:33.229 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:32 vm06 bash[27746]: audit 2026-03-08T23:04:32.074757+0000 mgr.y (mgr.24419) 67 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:04:33.229 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:32 vm06 bash[27746]: audit 2026-03-08T23:04:32.074757+0000 mgr.y (mgr.24419) 67 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:04:33.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:32 vm11 bash[23232]: cluster 2026-03-08T23:04:31.634153+0000 mgr.y (mgr.24419) 66 : cluster [DBG] pgmap v29: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-08T23:04:33.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:32 vm11 bash[23232]: cluster 2026-03-08T23:04:31.634153+0000 mgr.y (mgr.24419) 66 : cluster [DBG] pgmap v29: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.1 KiB/s rd, 1 op/s 2026-03-08T23:04:33.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:32 vm11 bash[23232]: audit 2026-03-08T23:04:32.074757+0000 mgr.y (mgr.24419) 67 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:04:33.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:32 vm11 bash[23232]: audit 2026-03-08T23:04:32.074757+0000 mgr.y (mgr.24419) 67 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:04:33.566 
INFO:teuthology.orchestra.run.vm06.stdout:{"pg_ready":true,"pg_map":{"version":29,"stamp":"2026-03-08T23:04:31.633793+0000","last_osdmap_epoch":0,"last_pg_scan":0,"pg_stats_sum":{"stat_sum":{"num_bytes":465419,"num_objects":199,"num_object_clones":0,"num_object_copies":597,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":199,"num_whiteouts":0,"num_read":917,"num_read_kb":776,"num_write":505,"num_write_kb":629,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":2,"num_bytes_recovered":459280,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":538,"ondisk_log_size":538,"up":396,"acting":396,"num_store_stats":0},"osd_stats_sum":{"up_from":0,"seq":0,"num_pgs":396,"num_osds":8,"num_per_pool_osds":8,"num_per_pool_omap_osds":8,"kb":167739392,"kb_used":221280,"kb_used_data":6588,"kb_used_omap":12,"kb_used_meta":214515,"kb_avail":167518112,"statfs":{"total":171765137408,"available":171538546688,"internally_reserved":0,"allocated":6746112,"data_stored":3405265,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":12710,"internal_metadata":219663962},"hb_peers":[],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1
},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[]},"pg_stats_delta":{"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":13,"num_read_kb":13,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0,"stamp_delta":"12.002063"},"pg_stats":[{"pgid":"6.1b","version":"62'1","reported_seq":22,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.706258+0000","last_change":"2026-03-08T23:03:11.572368+0000","last_active":"2026-03-08T23:03:37.706258+0000","last_peered":"2026-03-08T23:03:37.706258+0000","last_clean":"2026-03-08T23:03:37.706258+0000","last_became_active":"2026-03-08T23:03:11.571527+0000","last_became_peered":"2026-03-08T23:03:11.571527+0000","last_unstale":"2026-03-08T23:03:37.706258+0000","last_undegraded":"2026-03-08T23:03:37.706258+0000","last_fullsized":"2026-03-08T23:03:37.706258+0000","mapping_epoch":60,"log_start":"0'0",
"ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:10.242477+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:10.242477+0000","last_clean_scrub_stamp":"2026-03-08T23:03:10.242477+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T01:47:09.213034+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":13,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":1,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,6],"acting":[3,7,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"2.1f","version":"0'0","reported_seq":33,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.603107+0000","last_change":"2026-03-08T23:03:05.265445+0000","last_active":"
2026-03-08T23:03:37.603107+0000","last_peered":"2026-03-08T23:03:37.603107+0000","last_clean":"2026-03-08T23:03:37.603107+0000","last_became_active":"2026-03-08T23:03:05.265160+0000","last_became_peered":"2026-03-08T23:03:05.265160+0000","last_unstale":"2026-03-08T23:03:37.603107+0000","last_undegraded":"2026-03-08T23:03:37.603107+0000","last_fullsized":"2026-03-08T23:03:37.603107+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:03.949608+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:03.949608+0000","last_clean_scrub_stamp":"2026-03-08T23:03:03.949608+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T07:09:55.505460+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,4],"acting":[0,7,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.1e","version":"62'10","reported_seq":44,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.706402+0000","last_change":"2026-03-08T23:03:07.144309+0000","last_active":"2026-03-08T23:03:37.706402+0000","last_peered":"2026-03-08T23:03:37.706402+0000","last_clean":"2026-03-08T23:03:37.706402+0000","last_became_active":"2026-03-08T23:03:07.144171+0000","last_became_peered":"2026-03-08T23:03:07.144171+0000","last_unstale":"2026-03-08T23:03:37.706402+0000","last_undegraded":"2026-03-08T23:03:37.706402+0000","last_fullsized":"2026-03-08T23:03:37.706402+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:06.109853+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:06.1
09853+0000","last_clean_scrub_stamp":"2026-03-08T23:03:06.109853+0000","objects_scrubbed":0,"log_size":10,"log_dups_size":0,"ondisk_log_size":10,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T10:42:56.896174+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":15,"num_read_kb":10,"num_write":10,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,6,2],"acting":[3,6,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"5.18","version":"0'0","reported_seq":25,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.598958+0000","last_change":"2026-03-08T23:03:09.260951+0000","last_active":"2026-03-08T23:03:37.598958+0000","last_peered":"2026-03-08T23:03:37.598958+0000","last_clean":"2026-03-08T23:03:37.598958+0000","last_became_active":"2026-03-08T23:03:09.260581+0000","last_became_peered":"2026-03-08T23:03:09.260581+00
00","last_unstale":"2026-03-08T23:03:37.598958+0000","last_undegraded":"2026-03-08T23:03:37.598958+0000","last_fullsized":"2026-03-08T23:03:37.598958+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:08.120926+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:08.120926+0000","last_clean_scrub_stamp":"2026-03-08T23:03:08.120926+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T10:56:20.389361+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,6,1],"acting":[4,6,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"2
.1e","version":"0'0","reported_seq":33,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.705919+0000","last_change":"2026-03-08T23:03:05.302876+0000","last_active":"2026-03-08T23:03:37.705919+0000","last_peered":"2026-03-08T23:03:37.705919+0000","last_clean":"2026-03-08T23:03:37.705919+0000","last_became_active":"2026-03-08T23:03:05.302712+0000","last_became_peered":"2026-03-08T23:03:05.302712+0000","last_unstale":"2026-03-08T23:03:37.705919+0000","last_undegraded":"2026-03-08T23:03:37.705919+0000","last_fullsized":"2026-03-08T23:03:37.705919+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:03.949608+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:03.949608+0000","last_clean_scrub_stamp":"2026-03-08T23:03:03.949608+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T05:15:51.288124+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,5],"acting":[3,0,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.1f","version":"62'11","reported_seq":48,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.603543+0000","last_change":"2026-03-08T23:03:07.143004+0000","last_active":"2026-03-08T23:03:37.603543+0000","last_peered":"2026-03-08T23:03:37.603543+0000","last_clean":"2026-03-08T23:03:37.603543+0000","last_became_active":"2026-03-08T23:03:07.142907+0000","last_became_peered":"2026-03-08T23:03:07.142907+0000","last_unstale":"2026-03-08T23:03:37.603543+0000","last_undegraded":"2026-03-08T23:03:37.603543+0000","last_fullsized":"2026-03-08T23:03:37.603543+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:06.109853+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:06.1
09853+0000","last_clean_scrub_stamp":"2026-03-08T23:03:06.109853+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T02:41:59.389599+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":20,"num_read_kb":13,"num_write":12,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,5,2],"acting":[0,5,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"5.19","version":"0'0","reported_seq":25,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.703501+0000","last_change":"2026-03-08T23:03:09.273772+0000","last_active":"2026-03-08T23:03:37.703501+0000","last_peered":"2026-03-08T23:03:37.703501+0000","last_clean":"2026-03-08T23:03:37.703501+0000","last_became_active":"2026-03-08T23:03:09.272256+0000","last_became_peered":"2026-03-08T23:03:09.272256+
0000","last_unstale":"2026-03-08T23:03:37.703501+0000","last_undegraded":"2026-03-08T23:03:37.703501+0000","last_fullsized":"2026-03-08T23:03:37.703501+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:08.120926+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:08.120926+0000","last_clean_scrub_stamp":"2026-03-08T23:03:08.120926+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T04:09:39.991343+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,7],"acting":[1,5,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":
"6.1a","version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.598200+0000","last_change":"2026-03-08T23:03:11.309877+0000","last_active":"2026-03-08T23:03:37.598200+0000","last_peered":"2026-03-08T23:03:37.598200+0000","last_clean":"2026-03-08T23:03:37.598200+0000","last_became_active":"2026-03-08T23:03:11.309035+0000","last_became_peered":"2026-03-08T23:03:11.309035+0000","last_unstale":"2026-03-08T23:03:37.598200+0000","last_undegraded":"2026-03-08T23:03:37.598200+0000","last_fullsized":"2026-03-08T23:03:37.598200+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:10.242477+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:10.242477+0000","last_clean_scrub_stamp":"2026-03-08T23:03:10.242477+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T03:44:22.742378+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,5,1],"acting":[4,5,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"2.1d","version":"0'0","reported_seq":33,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.704268+0000","last_change":"2026-03-08T23:03:05.352635+0000","last_active":"2026-03-08T23:03:37.704268+0000","last_peered":"2026-03-08T23:03:37.704268+0000","last_clean":"2026-03-08T23:03:37.704268+0000","last_became_active":"2026-03-08T23:03:05.352185+0000","last_became_peered":"2026-03-08T23:03:05.352185+0000","last_unstale":"2026-03-08T23:03:37.704268+0000","last_undegraded":"2026-03-08T23:03:37.704268+0000","last_fullsized":"2026-03-08T23:03:37.704268+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:03.949608+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:03.949
608+0000","last_clean_scrub_stamp":"2026-03-08T23:03:03.949608+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T02:59:43.776557+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,6,0],"acting":[7,6,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.1c","version":"62'15","reported_seq":54,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.706739+0000","last_change":"2026-03-08T23:03:07.159152+0000","last_active":"2026-03-08T23:03:37.706739+0000","last_peered":"2026-03-08T23:03:37.706739+0000","last_clean":"2026-03-08T23:03:37.706739+0000","last_became_active":"2026-03-08T23:03:07.159019+0000","last_became_peered":"2026-03-08T23:03:07.159019+0000","l
ast_unstale":"2026-03-08T23:03:37.706739+0000","last_undegraded":"2026-03-08T23:03:37.706739+0000","last_fullsized":"2026-03-08T23:03:37.706739+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:06.109853+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:06.109853+0000","last_clean_scrub_stamp":"2026-03-08T23:03:06.109853+0000","objects_scrubbed":0,"log_size":15,"log_dups_size":0,"ondisk_log_size":15,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-09T23:27:11.074114+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":26,"num_read_kb":17,"num_write":16,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,4,1],"acting":[5,4,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":
"5.1a","version":"0'0","reported_seq":25,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.704296+0000","last_change":"2026-03-08T23:03:09.272176+0000","last_active":"2026-03-08T23:03:37.704296+0000","last_peered":"2026-03-08T23:03:37.704296+0000","last_clean":"2026-03-08T23:03:37.704296+0000","last_became_active":"2026-03-08T23:03:09.272051+0000","last_became_peered":"2026-03-08T23:03:09.272051+0000","last_unstale":"2026-03-08T23:03:37.704296+0000","last_undegraded":"2026-03-08T23:03:37.704296+0000","last_fullsized":"2026-03-08T23:03:37.704296+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:08.120926+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:08.120926+0000","last_clean_scrub_stamp":"2026-03-08T23:03:08.120926+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T00:36:40.231124+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,1],"acting":[7,4,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"6.19","version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.706705+0000","last_change":"2026-03-08T23:03:11.330324+0000","last_active":"2026-03-08T23:03:37.706705+0000","last_peered":"2026-03-08T23:03:37.706705+0000","last_clean":"2026-03-08T23:03:37.706705+0000","last_became_active":"2026-03-08T23:03:11.330132+0000","last_became_peered":"2026-03-08T23:03:11.330132+0000","last_unstale":"2026-03-08T23:03:37.706705+0000","last_undegraded":"2026-03-08T23:03:37.706705+0000","last_fullsized":"2026-03-08T23:03:37.706705+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:10.242477+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:10.242
477+0000","last_clean_scrub_stamp":"2026-03-08T23:03:10.242477+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T03:45:48.893007+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,3],"acting":[5,1,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"2.1c","version":"0'0","reported_seq":33,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.704489+0000","last_change":"2026-03-08T23:03:05.306509+0000","last_active":"2026-03-08T23:03:37.704489+0000","last_peered":"2026-03-08T23:03:37.704489+0000","last_clean":"2026-03-08T23:03:37.704489+0000","last_became_active":"2026-03-08T23:03:05.306317+0000","last_became_peered":"2026-03-08T23:03:05.306317+0000","las
t_unstale":"2026-03-08T23:03:37.704489+0000","last_undegraded":"2026-03-08T23:03:37.704489+0000","last_fullsized":"2026-03-08T23:03:37.704489+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:03.949608+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:03.949608+0000","last_clean_scrub_stamp":"2026-03-08T23:03:03.949608+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T08:01:08.158439+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,5,2],"acting":[7,5,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.1d","ve
rsion":"62'12","reported_seq":52,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.706045+0000","last_change":"2026-03-08T23:03:07.149272+0000","last_active":"2026-03-08T23:03:37.706045+0000","last_peered":"2026-03-08T23:03:37.706045+0000","last_clean":"2026-03-08T23:03:37.706045+0000","last_became_active":"2026-03-08T23:03:07.149170+0000","last_became_peered":"2026-03-08T23:03:07.149170+0000","last_unstale":"2026-03-08T23:03:37.706045+0000","last_undegraded":"2026-03-08T23:03:37.706045+0000","last_fullsized":"2026-03-08T23:03:37.706045+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:06.109853+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:06.109853+0000","last_clean_scrub_stamp":"2026-03-08T23:03:06.109853+0000","objects_scrubbed":0,"log_size":12,"log_dups_size":0,"ondisk_log_size":12,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T08:59:14.814510+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":220,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":25,"num_read_kb":16,"num_write":14,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,4,6],"acting":[5,4,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"5.1b","version":"0'0","reported_seq":25,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.706059+0000","last_change":"2026-03-08T23:03:09.270502+0000","last_active":"2026-03-08T23:03:37.706059+0000","last_peered":"2026-03-08T23:03:37.706059+0000","last_clean":"2026-03-08T23:03:37.706059+0000","last_became_active":"2026-03-08T23:03:09.270267+0000","last_became_peered":"2026-03-08T23:03:09.270267+0000","last_unstale":"2026-03-08T23:03:37.706059+0000","last_undegraded":"2026-03-08T23:03:37.706059+0000","last_fullsized":"2026-03-08T23:03:37.706059+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:08.120926+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:
08.120926+0000","last_clean_scrub_stamp":"2026-03-08T23:03:08.120926+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T10:18:55.781881+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,0,7],"acting":[5,0,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.18","version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.602400+0000","last_change":"2026-03-08T23:03:11.306586+0000","last_active":"2026-03-08T23:03:37.602400+0000","last_peered":"2026-03-08T23:03:37.602400+0000","last_clean":"2026-03-08T23:03:37.602400+0000","last_became_active":"2026-03-08T23:03:11.305881+0000","last_became_peered":"2026-03-08T23:03:11.305881+0000
","last_unstale":"2026-03-08T23:03:37.602400+0000","last_undegraded":"2026-03-08T23:03:37.602400+0000","last_fullsized":"2026-03-08T23:03:37.602400+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:10.242477+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:10.242477+0000","last_clean_scrub_stamp":"2026-03-08T23:03:10.242477+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T09:05:49.685712+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,1,7],"acting":[0,1,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.a
","version":"63'19","reported_seq":60,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.703419+0000","last_change":"2026-03-08T23:03:07.147417+0000","last_active":"2026-03-08T23:03:37.703419+0000","last_peered":"2026-03-08T23:03:37.703419+0000","last_clean":"2026-03-08T23:03:37.703419+0000","last_became_active":"2026-03-08T23:03:07.146002+0000","last_became_peered":"2026-03-08T23:03:07.146002+0000","last_unstale":"2026-03-08T23:03:37.703419+0000","last_undegraded":"2026-03-08T23:03:37.703419+0000","last_fullsized":"2026-03-08T23:03:37.703419+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:06.109853+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:06.109853+0000","last_clean_scrub_stamp":"2026-03-08T23:03:06.109853+0000","objects_scrubbed":0,"log_size":19,"log_dups_size":0,"ondisk_log_size":19,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T07:08:52.792912+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":9,"num_object_clones":0,"num_object_copies":27,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":9,"num_whiteouts":0,"num_read":32,"num_read_kb":21,"num_write":20,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,4,1],"acting":[6,4,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"2.b","version":"0'0","reported_seq":33,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.704386+0000","last_change":"2026-03-08T23:03:05.306556+0000","last_active":"2026-03-08T23:03:37.704386+0000","last_peered":"2026-03-08T23:03:37.704386+0000","last_clean":"2026-03-08T23:03:37.704386+0000","last_became_active":"2026-03-08T23:03:05.306189+0000","last_became_peered":"2026-03-08T23:03:05.306189+0000","last_unstale":"2026-03-08T23:03:37.704386+0000","last_undegraded":"2026-03-08T23:03:37.704386+0000","last_fullsized":"2026-03-08T23:03:37.704386+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:03.949608+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:0
3.949608+0000","last_clean_scrub_stamp":"2026-03-08T23:03:03.949608+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T10:18:44.920517+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,5],"acting":[7,4,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.c","version":"0'0","reported_seq":25,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.704512+0000","last_change":"2026-03-08T23:03:09.259574+0000","last_active":"2026-03-08T23:03:37.704512+0000","last_peered":"2026-03-08T23:03:37.704512+0000","last_clean":"2026-03-08T23:03:37.704512+0000","last_became_active":"2026-03-08T23:03:09.259361+0000","last_became_peered":"2026-03-08T23:03:09.259361+0000",
"last_unstale":"2026-03-08T23:03:37.704512+0000","last_undegraded":"2026-03-08T23:03:37.704512+0000","last_fullsized":"2026-03-08T23:03:37.704512+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:08.120926+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:08.120926+0000","last_clean_scrub_stamp":"2026-03-08T23:03:08.120926+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T06:33:22.882858+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,4,0],"acting":[1,4,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"6.f",
"version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.604954+0000","last_change":"2026-03-08T23:03:11.323886+0000","last_active":"2026-03-08T23:03:37.604954+0000","last_peered":"2026-03-08T23:03:37.604954+0000","last_clean":"2026-03-08T23:03:37.604954+0000","last_became_active":"2026-03-08T23:03:11.323772+0000","last_became_peered":"2026-03-08T23:03:11.323772+0000","last_unstale":"2026-03-08T23:03:37.604954+0000","last_undegraded":"2026-03-08T23:03:37.604954+0000","last_fullsized":"2026-03-08T23:03:37.604954+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:10.242477+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:10.242477+0000","last_clean_scrub_stamp":"2026-03-08T23:03:10.242477+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T04:05:48.283847+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,3,4],"acting":[2,3,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"3.b","version":"62'9","reported_seq":45,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.705564+0000","last_change":"2026-03-08T23:03:07.147742+0000","last_active":"2026-03-08T23:03:37.705564+0000","last_peered":"2026-03-08T23:03:37.705564+0000","last_clean":"2026-03-08T23:03:37.705564+0000","last_became_active":"2026-03-08T23:03:07.145871+0000","last_became_peered":"2026-03-08T23:03:07.145871+0000","last_unstale":"2026-03-08T23:03:37.705564+0000","last_undegraded":"2026-03-08T23:03:37.705564+0000","last_fullsized":"2026-03-08T23:03:37.705564+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:06.109853+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:06.109
853+0000","last_clean_scrub_stamp":"2026-03-08T23:03:06.109853+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T08:38:50.191279+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,4],"acting":[3,0,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"2.a","version":"0'0","reported_seq":33,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.703833+0000","last_change":"2026-03-08T23:03:05.258141+0000","last_active":"2026-03-08T23:03:37.703833+0000","last_peered":"2026-03-08T23:03:37.703833+0000","last_clean":"2026-03-08T23:03:37.703833+0000","last_became_active":"2026-03-08T23:03:05.257993+0000","last_became_peered":"2026-03-08T23:03:05.257993+0000"
,"last_unstale":"2026-03-08T23:03:37.703833+0000","last_undegraded":"2026-03-08T23:03:37.703833+0000","last_fullsized":"2026-03-08T23:03:37.703833+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:03.949608+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:03.949608+0000","last_clean_scrub_stamp":"2026-03-08T23:03:03.949608+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-09T23:22:36.458095+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,3,7],"acting":[1,3,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"5.d"
,"version":"63'11","reported_seq":52,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:04:20.397259+0000","last_change":"2026-03-08T23:03:09.269523+0000","last_active":"2026-03-08T23:04:20.397259+0000","last_peered":"2026-03-08T23:04:20.397259+0000","last_clean":"2026-03-08T23:04:20.397259+0000","last_became_active":"2026-03-08T23:03:09.269433+0000","last_became_peered":"2026-03-08T23:03:09.269433+0000","last_unstale":"2026-03-08T23:04:20.397259+0000","last_undegraded":"2026-03-08T23:04:20.397259+0000","last_fullsized":"2026-03-08T23:04:20.397259+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:08.120926+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:08.120926+0000","last_clean_scrub_stamp":"2026-03-08T23:03:08.120926+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T06:55:52.374613+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,7,5],"acting":[2,7,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.e","version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.599489+0000","last_change":"2026-03-08T23:03:11.301601+0000","last_active":"2026-03-08T23:03:37.599489+0000","last_peered":"2026-03-08T23:03:37.599489+0000","last_clean":"2026-03-08T23:03:37.599489+0000","last_became_active":"2026-03-08T23:03:11.301037+0000","last_became_peered":"2026-03-08T23:03:11.301037+0000","last_unstale":"2026-03-08T23:03:37.599489+0000","last_undegraded":"2026-03-08T23:03:37.599489+0000","last_fullsized":"2026-03-08T23:03:37.599489+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:10.242477+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:10.2424
77+0000","last_clean_scrub_stamp":"2026-03-08T23:03:10.242477+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T07:10:40.964320+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,1,2],"acting":[4,1,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3.8","version":"62'15","reported_seq":54,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.705622+0000","last_change":"2026-03-08T23:03:07.151750+0000","last_active":"2026-03-08T23:03:37.705622+0000","last_peered":"2026-03-08T23:03:37.705622+0000","last_clean":"2026-03-08T23:03:37.705622+0000","last_became_active":"2026-03-08T23:03:07.151662+0000","last_became_peered":"2026-03-08T23:03:07.151662+0000","las
t_unstale":"2026-03-08T23:03:37.705622+0000","last_undegraded":"2026-03-08T23:03:37.705622+0000","last_fullsized":"2026-03-08T23:03:37.705622+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:06.109853+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:06.109853+0000","last_clean_scrub_stamp":"2026-03-08T23:03:06.109853+0000","objects_scrubbed":0,"log_size":15,"log_dups_size":0,"ondisk_log_size":15,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-09T23:46:06.961838+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":26,"num_read_kb":17,"num_write":16,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,1,7],"acting":[3,1,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"2
.9","version":"0'0","reported_seq":33,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.703884+0000","last_change":"2026-03-08T23:03:05.258074+0000","last_active":"2026-03-08T23:03:37.703884+0000","last_peered":"2026-03-08T23:03:37.703884+0000","last_clean":"2026-03-08T23:03:37.703884+0000","last_became_active":"2026-03-08T23:03:05.257838+0000","last_became_peered":"2026-03-08T23:03:05.257838+0000","last_unstale":"2026-03-08T23:03:37.703884+0000","last_undegraded":"2026-03-08T23:03:37.703884+0000","last_fullsized":"2026-03-08T23:03:37.703884+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:03.949608+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:03.949608+0000","last_clean_scrub_stamp":"2026-03-08T23:03:03.949608+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T03:56:48.299746+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,7,3],"acting":[1,7,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"5.e","version":"63'11","reported_seq":55,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:04:20.396630+0000","last_change":"2026-03-08T23:03:09.259469+0000","last_active":"2026-03-08T23:04:20.396630+0000","last_peered":"2026-03-08T23:04:20.396630+0000","last_clean":"2026-03-08T23:04:20.396630+0000","last_became_active":"2026-03-08T23:03:09.259374+0000","last_became_peered":"2026-03-08T23:03:09.259374+0000","last_unstale":"2026-03-08T23:04:20.396630+0000","last_undegraded":"2026-03-08T23:04:20.396630+0000","last_fullsized":"2026-03-08T23:04:20.396630+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:08.120926+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:08.12
0926+0000","last_clean_scrub_stamp":"2026-03-08T23:03:08.120926+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T07:11:44.683631+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,5,0],"acting":[4,5,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"6.d","version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.705160+0000","last_change":"2026-03-08T23:03:11.313287+0000","last_active":"2026-03-08T23:03:37.705160+0000","last_peered":"2026-03-08T23:03:37.705160+0000","last_clean":"2026-03-08T23:03:37.705160+0000","last_became_active":"2026-03-08T23:03:11.313145+0000","last_became_peered":"2026-03-08T23:03:11.313145+0000","l
ast_unstale":"2026-03-08T23:03:37.705160+0000","last_undegraded":"2026-03-08T23:03:37.705160+0000","last_fullsized":"2026-03-08T23:03:37.705160+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:10.242477+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:10.242477+0000","last_clean_scrub_stamp":"2026-03-08T23:03:10.242477+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T02:33:51.990289+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,0],"acting":[5,1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"3.9","v
ersion":"62'12","reported_seq":52,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.598384+0000","last_change":"2026-03-08T23:03:07.157761+0000","last_active":"2026-03-08T23:03:37.598384+0000","last_peered":"2026-03-08T23:03:37.598384+0000","last_clean":"2026-03-08T23:03:37.598384+0000","last_became_active":"2026-03-08T23:03:07.157647+0000","last_became_peered":"2026-03-08T23:03:07.157647+0000","last_unstale":"2026-03-08T23:03:37.598384+0000","last_undegraded":"2026-03-08T23:03:37.598384+0000","last_fullsized":"2026-03-08T23:03:37.598384+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:06.109853+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:06.109853+0000","last_clean_scrub_stamp":"2026-03-08T23:03:06.109853+0000","objects_scrubbed":0,"log_size":12,"log_dups_size":0,"ondisk_log_size":12,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T04:07:03.975955+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":220,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":25,"num_read_kb":16,"num_write":14,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,2,7],"acting":[4,2,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"2.8","version":"0'0","reported_seq":33,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.704340+0000","last_change":"2026-03-08T23:03:05.351769+0000","last_active":"2026-03-08T23:03:37.704340+0000","last_peered":"2026-03-08T23:03:37.704340+0000","last_clean":"2026-03-08T23:03:37.704340+0000","last_became_active":"2026-03-08T23:03:05.351636+0000","last_became_peered":"2026-03-08T23:03:05.351636+0000","last_unstale":"2026-03-08T23:03:37.704340+0000","last_undegraded":"2026-03-08T23:03:37.704340+0000","last_fullsized":"2026-03-08T23:03:37.704340+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:03.949608+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:0
3.949608+0000","last_clean_scrub_stamp":"2026-03-08T23:03:03.949608+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T09:34:10.951093+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,0,1],"acting":[7,0,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.f","version":"0'0","reported_seq":25,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.707037+0000","last_change":"2026-03-08T23:03:09.260254+0000","last_active":"2026-03-08T23:03:37.707037+0000","last_peered":"2026-03-08T23:03:37.707037+0000","last_clean":"2026-03-08T23:03:37.707037+0000","last_became_active":"2026-03-08T23:03:09.260175+0000","last_became_peered":"2026-03-08T23:03:09.260175+0000",
"last_unstale":"2026-03-08T23:03:37.707037+0000","last_undegraded":"2026-03-08T23:03:37.707037+0000","last_fullsized":"2026-03-08T23:03:37.707037+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:08.120926+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:08.120926+0000","last_clean_scrub_stamp":"2026-03-08T23:03:08.120926+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T08:32:30.300842+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,4,6],"acting":[5,4,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.c",
"version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.705421+0000","last_change":"2026-03-08T23:03:11.571776+0000","last_active":"2026-03-08T23:03:37.705421+0000","last_peered":"2026-03-08T23:03:37.705421+0000","last_clean":"2026-03-08T23:03:37.705421+0000","last_became_active":"2026-03-08T23:03:11.570841+0000","last_became_peered":"2026-03-08T23:03:11.570841+0000","last_unstale":"2026-03-08T23:03:37.705421+0000","last_undegraded":"2026-03-08T23:03:37.705421+0000","last_fullsized":"2026-03-08T23:03:37.705421+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:10.242477+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:10.242477+0000","last_clean_scrub_stamp":"2026-03-08T23:03:10.242477+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T06:09:44.411491+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,6,5],"acting":[3,6,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.6","version":"62'12","reported_seq":47,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.605162+0000","last_change":"2026-03-08T23:03:07.149537+0000","last_active":"2026-03-08T23:03:37.605162+0000","last_peered":"2026-03-08T23:03:37.605162+0000","last_clean":"2026-03-08T23:03:37.605162+0000","last_became_active":"2026-03-08T23:03:07.149449+0000","last_became_peered":"2026-03-08T23:03:07.149449+0000","last_unstale":"2026-03-08T23:03:37.605162+0000","last_undegraded":"2026-03-08T23:03:37.605162+0000","last_fullsized":"2026-03-08T23:03:37.605162+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:06.109853+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:06.10
9853+0000","last_clean_scrub_stamp":"2026-03-08T23:03:06.109853+0000","objects_scrubbed":0,"log_size":12,"log_dups_size":0,"ondisk_log_size":12,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T10:23:50.732022+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":6,"num_object_clones":0,"num_object_copies":18,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":6,"num_whiteouts":0,"num_read":18,"num_read_kb":12,"num_write":12,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,1,4],"acting":[0,1,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"2.7","version":"0'0","reported_seq":33,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.703679+0000","last_change":"2026-03-08T23:03:05.266738+0000","last_active":"2026-03-08T23:03:37.703679+0000","last_peered":"2026-03-08T23:03:37.703679+0000","last_clean":"2026-03-08T23:03:37.703679+0000","last_became_active":"2026-03-08T23:03:05.266518+0000","last_became_peered":"2026-03-08T23:03:05.266518+0000
","last_unstale":"2026-03-08T23:03:37.703679+0000","last_undegraded":"2026-03-08T23:03:37.703679+0000","last_fullsized":"2026-03-08T23:03:37.703679+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:03.949608+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:03.949608+0000","last_clean_scrub_stamp":"2026-03-08T23:03:03.949608+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T10:49:08.039597+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,7,2],"acting":[6,7,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"4.1
","version":"62'1","reported_seq":35,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.599460+0000","last_change":"2026-03-08T23:03:14.653787+0000","last_active":"2026-03-08T23:03:37.599460+0000","last_peered":"2026-03-08T23:03:37.599460+0000","last_clean":"2026-03-08T23:03:37.599460+0000","last_became_active":"2026-03-08T23:03:08.139918+0000","last_became_peered":"2026-03-08T23:03:08.139918+0000","last_unstale":"2026-03-08T23:03:37.599460+0000","last_undegraded":"2026-03-08T23:03:37.599460+0000","last_fullsized":"2026-03-08T23:03:37.599460+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:07.117572+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:07.117572+0000","last_clean_scrub_stamp":"2026-03-08T23:03:07.117572+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-09T23:10:50.001503+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00031847599999999999,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,5,6],"acting":[4,5,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"5.0","version":"63'11","reported_seq":52,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:04:20.397200+0000","last_change":"2026-03-08T23:03:09.269944+0000","last_active":"2026-03-08T23:04:20.397200+0000","last_peered":"2026-03-08T23:04:20.397200+0000","last_clean":"2026-03-08T23:04:20.397200+0000","last_became_active":"2026-03-08T23:03:09.269840+0000","last_became_peered":"2026-03-08T23:03:09.269840+0000","last_unstale":"2026-03-08T23:04:20.397200+0000","last_undegraded":"2026-03-08T23:04:20.397200+0000","last_fullsized":"2026-03-08T23:04:20.397200+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:08.120926+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2
026-03-08T23:03:08.120926+0000","last_clean_scrub_stamp":"2026-03-08T23:03:08.120926+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T04:09:18.229864+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,1,4],"acting":[3,1,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"6.3","version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.703784+0000","last_change":"2026-03-08T23:03:11.572888+0000","last_active":"2026-03-08T23:03:37.703784+0000","last_peered":"2026-03-08T23:03:37.703784+0000","last_clean":"2026-03-08T23:03:37.703784+0000","last_became_active":"2026-03-08T23:03:11.572686+0000","last_became_peered":"2026-03-08T23:
03:11.572686+0000","last_unstale":"2026-03-08T23:03:37.703784+0000","last_undegraded":"2026-03-08T23:03:37.703784+0000","last_fullsized":"2026-03-08T23:03:37.703784+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:10.242477+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:10.242477+0000","last_clean_scrub_stamp":"2026-03-08T23:03:10.242477+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T07:29:36.565705+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,6,2],"acting":[7,6,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps"
:[]},{"pgid":"3.7","version":"62'13","reported_seq":56,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.705697+0000","last_change":"2026-03-08T23:03:07.151778+0000","last_active":"2026-03-08T23:03:37.705697+0000","last_peered":"2026-03-08T23:03:37.705697+0000","last_clean":"2026-03-08T23:03:37.705697+0000","last_became_active":"2026-03-08T23:03:07.151698+0000","last_became_peered":"2026-03-08T23:03:07.151698+0000","last_unstale":"2026-03-08T23:03:37.705697+0000","last_undegraded":"2026-03-08T23:03:37.705697+0000","last_fullsized":"2026-03-08T23:03:37.705697+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:06.109853+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:06.109853+0000","last_clean_scrub_stamp":"2026-03-08T23:03:06.109853+0000","objects_scrubbed":0,"log_size":13,"log_dups_size":0,"ondisk_log_size":13,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T05:40:24.616411+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":330,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":30,"num_read_kb":19,"num_write":16,"num_write_kb":3,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,0],"acting":[3,7,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"2.6","version":"55'1","reported_seq":34,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.703940+0000","last_change":"2026-03-08T23:03:05.259562+0000","last_active":"2026-03-08T23:03:37.703940+0000","last_peered":"2026-03-08T23:03:37.703940+0000","last_clean":"2026-03-08T23:03:37.703940+0000","last_became_active":"2026-03-08T23:03:05.259457+0000","last_became_peered":"2026-03-08T23:03:05.259457+0000","last_unstale":"2026-03-08T23:03:37.703940+0000","last_undegraded":"2026-03-08T23:03:37.703940+0000","last_fullsized":"2026-03-08T23:03:37.703940+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:03.949608+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:
03.949608+0000","last_clean_scrub_stamp":"2026-03-08T23:03:03.949608+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T03:14:47.267473+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":46,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":1,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,6,4],"acting":[1,6,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"4.0","version":"65'5","reported_seq":110,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:04:28.149866+0000","last_change":"2026-03-08T23:03:14.975893+0000","last_active":"2026-03-08T23:04:28.149866+0000","last_peered":"2026-03-08T23:04:28.149866+0000","last_clean":"2026-03-08T23:04:28.149866+0000","last_became_active":"2026-03-08T23:03:08.143743+0000","last_became_peered":"2026-03-08T23:03:08.143743+00
00","last_unstale":"2026-03-08T23:04:28.149866+0000","last_undegraded":"2026-03-08T23:04:28.149866+0000","last_fullsized":"2026-03-08T23:04:28.149866+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:07.117572+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:07.117572+0000","last_clean_scrub_stamp":"2026-03-08T23:03:07.117572+0000","objects_scrubbed":0,"log_size":5,"log_dups_size":0,"ondisk_log_size":5,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T04:57:49.218220+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00076602300000000001,"stat_sum":{"num_bytes":389,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":73,"num_read_kb":68,"num_write":4,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,0],"acting":[3,7,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"pur
ged_snaps":[]},{"pgid":"5.1","version":"0'0","reported_seq":25,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.598839+0000","last_change":"2026-03-08T23:03:09.271372+0000","last_active":"2026-03-08T23:03:37.598839+0000","last_peered":"2026-03-08T23:03:37.598839+0000","last_clean":"2026-03-08T23:03:37.598839+0000","last_became_active":"2026-03-08T23:03:09.271293+0000","last_became_peered":"2026-03-08T23:03:09.271293+0000","last_unstale":"2026-03-08T23:03:37.598839+0000","last_undegraded":"2026-03-08T23:03:37.598839+0000","last_fullsized":"2026-03-08T23:03:37.598839+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:08.120926+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:08.120926+0000","last_clean_scrub_stamp":"2026-03-08T23:03:08.120926+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T04:52:56.466257+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,3,7],"acting":[4,3,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"6.2","version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.598760+0000","last_change":"2026-03-08T23:03:11.322100+0000","last_active":"2026-03-08T23:03:37.598760+0000","last_peered":"2026-03-08T23:03:37.598760+0000","last_clean":"2026-03-08T23:03:37.598760+0000","last_became_active":"2026-03-08T23:03:11.321986+0000","last_became_peered":"2026-03-08T23:03:11.321986+0000","last_unstale":"2026-03-08T23:03:37.598760+0000","last_undegraded":"2026-03-08T23:03:37.598760+0000","last_fullsized":"2026-03-08T23:03:37.598760+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:10.242477+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:10.2424
77+0000","last_clean_scrub_stamp":"2026-03-08T23:03:10.242477+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T01:01:44.960156+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,3,2],"acting":[4,3,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3.4","version":"63'30","reported_seq":96,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:04:20.397730+0000","last_change":"2026-03-08T23:03:07.146444+0000","last_active":"2026-03-08T23:04:20.397730+0000","last_peered":"2026-03-08T23:04:20.397730+0000","last_clean":"2026-03-08T23:04:20.397730+0000","last_became_active":"2026-03-08T23:03:07.146311+0000","last_became_peered":"2026-03-08T23:03:07.146311+0000","las
t_unstale":"2026-03-08T23:04:20.397730+0000","last_undegraded":"2026-03-08T23:04:20.397730+0000","last_fullsized":"2026-03-08T23:04:20.397730+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:06.109853+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:06.109853+0000","last_clean_scrub_stamp":"2026-03-08T23:03:06.109853+0000","objects_scrubbed":0,"log_size":30,"log_dups_size":0,"ondisk_log_size":30,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T06:45:41.090724+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":358,"num_objects":10,"num_object_clones":0,"num_object_copies":30,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":10,"num_whiteouts":0,"num_read":51,"num_read_kb":36,"num_write":26,"num_write_kb":4,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,2,5],"acting":[1,2,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":
"2.5","version":"0'0","reported_seq":33,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.707797+0000","last_change":"2026-03-08T23:03:05.352431+0000","last_active":"2026-03-08T23:03:37.707797+0000","last_peered":"2026-03-08T23:03:37.707797+0000","last_clean":"2026-03-08T23:03:37.707797+0000","last_became_active":"2026-03-08T23:03:05.352037+0000","last_became_peered":"2026-03-08T23:03:05.352037+0000","last_unstale":"2026-03-08T23:03:37.707797+0000","last_undegraded":"2026-03-08T23:03:37.707797+0000","last_fullsized":"2026-03-08T23:03:37.707797+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:03.949608+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:03.949608+0000","last_clean_scrub_stamp":"2026-03-08T23:03:03.949608+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T04:28:44.219812+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,0,4],"acting":[7,0,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.2","version":"0'0","reported_seq":25,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.703962+0000","last_change":"2026-03-08T23:03:09.266587+0000","last_active":"2026-03-08T23:03:37.703962+0000","last_peered":"2026-03-08T23:03:37.703962+0000","last_clean":"2026-03-08T23:03:37.703962+0000","last_became_active":"2026-03-08T23:03:09.266483+0000","last_became_peered":"2026-03-08T23:03:09.266483+0000","last_unstale":"2026-03-08T23:03:37.703962+0000","last_undegraded":"2026-03-08T23:03:37.703962+0000","last_fullsized":"2026-03-08T23:03:37.703962+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:08.120926+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:08.1209
26+0000","last_clean_scrub_stamp":"2026-03-08T23:03:08.120926+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T01:23:31.451403+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,0,5],"acting":[6,0,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"6.1","version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.703529+0000","last_change":"2026-03-08T23:03:11.569014+0000","last_active":"2026-03-08T23:03:37.703529+0000","last_peered":"2026-03-08T23:03:37.703529+0000","last_clean":"2026-03-08T23:03:37.703529+0000","last_became_active":"2026-03-08T23:03:11.568850+0000","last_became_peered":"2026-03-08T23:03:11.568850+0000","last_
unstale":"2026-03-08T23:03:37.703529+0000","last_undegraded":"2026-03-08T23:03:37.703529+0000","last_fullsized":"2026-03-08T23:03:37.703529+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:10.242477+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:10.242477+0000","last_clean_scrub_stamp":"2026-03-08T23:03:10.242477+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-09T23:52:40.696997+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,6,2],"acting":[1,6,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"3.5","versi
on":"62'16","reported_seq":68,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:04:20.396260+0000","last_change":"2026-03-08T23:03:07.146923+0000","last_active":"2026-03-08T23:04:20.396260+0000","last_peered":"2026-03-08T23:04:20.396260+0000","last_clean":"2026-03-08T23:04:20.396260+0000","last_became_active":"2026-03-08T23:03:07.143329+0000","last_became_peered":"2026-03-08T23:03:07.143329+0000","last_unstale":"2026-03-08T23:04:20.396260+0000","last_undegraded":"2026-03-08T23:04:20.396260+0000","last_fullsized":"2026-03-08T23:04:20.396260+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:06.109853+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:06.109853+0000","last_clean_scrub_stamp":"2026-03-08T23:03:06.109853+0000","objects_scrubbed":0,"log_size":16,"log_dups_size":0,"ondisk_log_size":16,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T08:19:27.131443+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":154,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":25,"num_read_kb":15,"num_write":13,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3,2],"acting":[5,3,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"2.4","version":"0'0","reported_seq":33,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.704256+0000","last_change":"2026-03-08T23:03:05.272548+0000","last_active":"2026-03-08T23:03:37.704256+0000","last_peered":"2026-03-08T23:03:37.704256+0000","last_clean":"2026-03-08T23:03:37.704256+0000","last_became_active":"2026-03-08T23:03:05.272360+0000","last_became_peered":"2026-03-08T23:03:05.272360+0000","last_unstale":"2026-03-08T23:03:37.704256+0000","last_undegraded":"2026-03-08T23:03:37.704256+0000","last_fullsized":"2026-03-08T23:03:37.704256+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:03.949608+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:0
3.949608+0000","last_clean_scrub_stamp":"2026-03-08T23:03:03.949608+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T07:34:11.504201+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,0,7],"acting":[1,0,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"4.2","version":"64'2","reported_seq":36,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.704319+0000","last_change":"2026-03-08T23:03:14.661508+0000","last_active":"2026-03-08T23:03:37.704319+0000","last_peered":"2026-03-08T23:03:37.704319+0000","last_clean":"2026-03-08T23:03:37.704319+0000","last_became_active":"2026-03-08T23:03:08.142845+0000","last_became_peered":"2026-03-08T23:03:08.142845+0000"
,"last_unstale":"2026-03-08T23:03:37.704319+0000","last_undegraded":"2026-03-08T23:03:37.704319+0000","last_fullsized":"2026-03-08T23:03:37.704319+0000","mapping_epoch":57,"log_start":"0'0","ondisk_log_start":"0'0","created":57,"last_epoch_clean":58,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:07.117572+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:07.117572+0000","last_clean_scrub_stamp":"2026-03-08T23:03:07.117572+0000","objects_scrubbed":0,"log_size":2,"log_dups_size":0,"ondisk_log_size":2,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T06:34:53.911312+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.001013176,"stat_sum":{"num_bytes":19,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":2,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,4],"acting":[1,5,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"
pgid":"5.3","version":"63'11","reported_seq":52,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:04:20.397557+0000","last_change":"2026-03-08T23:03:09.262286+0000","last_active":"2026-03-08T23:04:20.397557+0000","last_peered":"2026-03-08T23:04:20.397557+0000","last_clean":"2026-03-08T23:04:20.397557+0000","last_became_active":"2026-03-08T23:03:09.262111+0000","last_became_peered":"2026-03-08T23:03:09.262111+0000","last_unstale":"2026-03-08T23:04:20.397557+0000","last_undegraded":"2026-03-08T23:04:20.397557+0000","last_fullsized":"2026-03-08T23:04:20.397557+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:08.120926+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:08.120926+0000","last_clean_scrub_stamp":"2026-03-08T23:03:08.120926+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T03:05:50.392258+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,6,5],"acting":[0,6,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"6.0","version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.603277+0000","last_change":"2026-03-08T23:03:11.321709+0000","last_active":"2026-03-08T23:03:37.603277+0000","last_peered":"2026-03-08T23:03:37.603277+0000","last_clean":"2026-03-08T23:03:37.603277+0000","last_became_active":"2026-03-08T23:03:11.321586+0000","last_became_peered":"2026-03-08T23:03:11.321586+0000","last_unstale":"2026-03-08T23:03:37.603277+0000","last_undegraded":"2026-03-08T23:03:37.603277+0000","last_fullsized":"2026-03-08T23:03:37.603277+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:10.242477+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:10.2424
77+0000","last_clean_scrub_stamp":"2026-03-08T23:03:10.242477+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T10:18:33.405135+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,3,2],"acting":[0,3,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.3","version":"62'19","reported_seq":65,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.599055+0000","last_change":"2026-03-08T23:03:07.158372+0000","last_active":"2026-03-08T23:03:37.599055+0000","last_peered":"2026-03-08T23:03:37.599055+0000","last_clean":"2026-03-08T23:03:37.599055+0000","last_became_active":"2026-03-08T23:03:07.148537+0000","last_became_peered":"2026-03-08T23:03:07.148537+0000","las
t_unstale":"2026-03-08T23:03:37.599055+0000","last_undegraded":"2026-03-08T23:03:37.599055+0000","last_fullsized":"2026-03-08T23:03:37.599055+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:06.109853+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:06.109853+0000","last_clean_scrub_stamp":"2026-03-08T23:03:06.109853+0000","objects_scrubbed":0,"log_size":19,"log_dups_size":0,"ondisk_log_size":19,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T07:54:19.176556+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":330,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":39,"num_read_kb":25,"num_write":22,"num_write_kb":3,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,0,6],"acting":[4,0,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"2
.2","version":"0'0","reported_seq":33,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.706602+0000","last_change":"2026-03-08T23:03:05.258324+0000","last_active":"2026-03-08T23:03:37.706602+0000","last_peered":"2026-03-08T23:03:37.706602+0000","last_clean":"2026-03-08T23:03:37.706602+0000","last_became_active":"2026-03-08T23:03:05.258194+0000","last_became_peered":"2026-03-08T23:03:05.258194+0000","last_unstale":"2026-03-08T23:03:37.706602+0000","last_undegraded":"2026-03-08T23:03:37.706602+0000","last_fullsized":"2026-03-08T23:03:37.706602+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:03.949608+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:03.949608+0000","last_clean_scrub_stamp":"2026-03-08T23:03:03.949608+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T01:43:59.696656+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,6],"acting":[5,1,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"5.5","version":"0'0","reported_seq":25,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.603134+0000","last_change":"2026-03-08T23:03:09.257843+0000","last_active":"2026-03-08T23:03:37.603134+0000","last_peered":"2026-03-08T23:03:37.603134+0000","last_clean":"2026-03-08T23:03:37.603134+0000","last_became_active":"2026-03-08T23:03:09.257057+0000","last_became_peered":"2026-03-08T23:03:09.257057+0000","last_unstale":"2026-03-08T23:03:37.603134+0000","last_undegraded":"2026-03-08T23:03:37.603134+0000","last_fullsized":"2026-03-08T23:03:37.603134+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:08.120926+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:08.1209
26+0000","last_clean_scrub_stamp":"2026-03-08T23:03:08.120926+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T01:26:30.827034+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,1,4],"acting":[0,1,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"6.6","version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.705397+0000","last_change":"2026-03-08T23:03:11.322587+0000","last_active":"2026-03-08T23:03:37.705397+0000","last_peered":"2026-03-08T23:03:37.705397+0000","last_clean":"2026-03-08T23:03:37.705397+0000","last_became_active":"2026-03-08T23:03:11.321815+0000","last_became_peered":"2026-03-08T23:03:11.321815+0000","last_
unstale":"2026-03-08T23:03:37.705397+0000","last_undegraded":"2026-03-08T23:03:37.705397+0000","last_fullsized":"2026-03-08T23:03:37.705397+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:10.242477+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:10.242477+0000","last_clean_scrub_stamp":"2026-03-08T23:03:10.242477+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T04:06:24.094455+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,4,7],"acting":[3,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.0","versi
on":"62'18","reported_seq":61,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.703630+0000","last_change":"2026-03-08T23:03:07.143410+0000","last_active":"2026-03-08T23:03:37.703630+0000","last_peered":"2026-03-08T23:03:37.703630+0000","last_clean":"2026-03-08T23:03:37.703630+0000","last_became_active":"2026-03-08T23:03:07.143316+0000","last_became_peered":"2026-03-08T23:03:07.143316+0000","last_unstale":"2026-03-08T23:03:37.703630+0000","last_undegraded":"2026-03-08T23:03:37.703630+0000","last_fullsized":"2026-03-08T23:03:37.703630+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:06.109853+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:06.109853+0000","last_clean_scrub_stamp":"2026-03-08T23:03:06.109853+0000","objects_scrubbed":0,"log_size":18,"log_dups_size":0,"ondisk_log_size":18,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T08:53:40.400754+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":220,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":34,"num_read_kb":22,"num_write":20,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,2,6],"acting":[1,2,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"2.1","version":"0'0","reported_seq":33,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.604399+0000","last_change":"2026-03-08T23:03:05.271515+0000","last_active":"2026-03-08T23:03:37.604399+0000","last_peered":"2026-03-08T23:03:37.604399+0000","last_clean":"2026-03-08T23:03:37.604399+0000","last_became_active":"2026-03-08T23:03:05.258158+0000","last_became_peered":"2026-03-08T23:03:05.258158+0000","last_unstale":"2026-03-08T23:03:37.604399+0000","last_undegraded":"2026-03-08T23:03:37.604399+0000","last_fullsized":"2026-03-08T23:03:37.604399+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:03.949608+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:0
3.949608+0000","last_clean_scrub_stamp":"2026-03-08T23:03:03.949608+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T06:08:43.394496+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,3,0],"acting":[2,3,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"5.6","version":"0'0","reported_seq":25,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.604463+0000","last_change":"2026-03-08T23:03:09.269852+0000","last_active":"2026-03-08T23:03:37.604463+0000","last_peered":"2026-03-08T23:03:37.604463+0000","last_clean":"2026-03-08T23:03:37.604463+0000","last_became_active":"2026-03-08T23:03:09.269760+0000","last_became_peered":"2026-03-08T23:03:09.269760+0000",
"last_unstale":"2026-03-08T23:03:37.604463+0000","last_undegraded":"2026-03-08T23:03:37.604463+0000","last_fullsized":"2026-03-08T23:03:37.604463+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:08.120926+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:08.120926+0000","last_clean_scrub_stamp":"2026-03-08T23:03:08.120926+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T07:23:48.614808+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,5,7],"acting":[2,5,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.5",
"version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.704723+0000","last_change":"2026-03-08T23:03:11.572880+0000","last_active":"2026-03-08T23:03:37.704723+0000","last_peered":"2026-03-08T23:03:37.704723+0000","last_clean":"2026-03-08T23:03:37.704723+0000","last_became_active":"2026-03-08T23:03:11.572659+0000","last_became_peered":"2026-03-08T23:03:11.572659+0000","last_unstale":"2026-03-08T23:03:37.704723+0000","last_undegraded":"2026-03-08T23:03:37.704723+0000","last_fullsized":"2026-03-08T23:03:37.704723+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:10.242477+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:10.242477+0000","last_clean_scrub_stamp":"2026-03-08T23:03:10.242477+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T08:01:18.837585+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,6,3],"acting":[7,6,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.1","version":"62'14","reported_seq":50,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.605037+0000","last_change":"2026-03-08T23:03:07.150136+0000","last_active":"2026-03-08T23:03:37.605037+0000","last_peered":"2026-03-08T23:03:37.605037+0000","last_clean":"2026-03-08T23:03:37.605037+0000","last_became_active":"2026-03-08T23:03:07.149918+0000","last_became_peered":"2026-03-08T23:03:07.149918+0000","last_unstale":"2026-03-08T23:03:37.605037+0000","last_undegraded":"2026-03-08T23:03:37.605037+0000","last_fullsized":"2026-03-08T23:03:37.605037+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:06.109853+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:06.10
9853+0000","last_clean_scrub_stamp":"2026-03-08T23:03:06.109853+0000","objects_scrubbed":0,"log_size":14,"log_dups_size":0,"ondisk_log_size":14,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T00:03:31.384337+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":21,"num_read_kb":14,"num_write":14,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,4,3],"acting":[0,4,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"2.0","version":"0'0","reported_seq":33,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.704309+0000","last_change":"2026-03-08T23:03:05.352768+0000","last_active":"2026-03-08T23:03:37.704309+0000","last_peered":"2026-03-08T23:03:37.704309+0000","last_clean":"2026-03-08T23:03:37.704309+0000","last_became_active":"2026-03-08T23:03:05.352289+0000","last_became_peered":"2026-03-08T23:03:05.352289+0000
","last_unstale":"2026-03-08T23:03:37.704309+0000","last_undegraded":"2026-03-08T23:03:37.704309+0000","last_fullsized":"2026-03-08T23:03:37.704309+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:03.949608+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:03.949608+0000","last_clean_scrub_stamp":"2026-03-08T23:03:03.949608+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T05:13:37.446786+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,1,0],"acting":[7,1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.7
","version":"0'0","reported_seq":25,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.706772+0000","last_change":"2026-03-08T23:03:09.259498+0000","last_active":"2026-03-08T23:03:37.706772+0000","last_peered":"2026-03-08T23:03:37.706772+0000","last_clean":"2026-03-08T23:03:37.706772+0000","last_became_active":"2026-03-08T23:03:09.259373+0000","last_became_peered":"2026-03-08T23:03:09.259373+0000","last_unstale":"2026-03-08T23:03:37.706772+0000","last_undegraded":"2026-03-08T23:03:37.706772+0000","last_fullsized":"2026-03-08T23:03:37.706772+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:08.120926+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:08.120926+0000","last_clean_scrub_stamp":"2026-03-08T23:03:08.120926+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T03:14:51.399825+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,0],"acting":[5,1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.4","version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.704060+0000","last_change":"2026-03-08T23:03:11.322809+0000","last_active":"2026-03-08T23:03:37.704060+0000","last_peered":"2026-03-08T23:03:37.704060+0000","last_clean":"2026-03-08T23:03:37.704060+0000","last_became_active":"2026-03-08T23:03:11.322694+0000","last_became_peered":"2026-03-08T23:03:11.322694+0000","last_unstale":"2026-03-08T23:03:37.704060+0000","last_undegraded":"2026-03-08T23:03:37.704060+0000","last_fullsized":"2026-03-08T23:03:37.704060+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:10.242477+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:10.2424
77+0000","last_clean_scrub_stamp":"2026-03-08T23:03:10.242477+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T07:23:45.443972+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,3],"acting":[1,5,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"3.2","version":"62'10","reported_seq":44,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.705996+0000","last_change":"2026-03-08T23:03:07.144247+0000","last_active":"2026-03-08T23:03:37.705996+0000","last_peered":"2026-03-08T23:03:37.705996+0000","last_clean":"2026-03-08T23:03:37.705996+0000","last_became_active":"2026-03-08T23:03:07.144056+0000","last_became_peered":"2026-03-08T23:03:07.144056+0000","las
t_unstale":"2026-03-08T23:03:37.705996+0000","last_undegraded":"2026-03-08T23:03:37.705996+0000","last_fullsized":"2026-03-08T23:03:37.705996+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:06.109853+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:06.109853+0000","last_clean_scrub_stamp":"2026-03-08T23:03:06.109853+0000","objects_scrubbed":0,"log_size":10,"log_dups_size":0,"ondisk_log_size":10,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T02:13:49.239945+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":15,"num_read_kb":10,"num_write":10,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,5,6],"acting":[3,5,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"2.3
","version":"0'0","reported_seq":33,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.706226+0000","last_change":"2026-03-08T23:03:05.261439+0000","last_active":"2026-03-08T23:03:37.706226+0000","last_peered":"2026-03-08T23:03:37.706226+0000","last_clean":"2026-03-08T23:03:37.706226+0000","last_became_active":"2026-03-08T23:03:05.261305+0000","last_became_peered":"2026-03-08T23:03:05.261305+0000","last_unstale":"2026-03-08T23:03:37.706226+0000","last_undegraded":"2026-03-08T23:03:37.706226+0000","last_fullsized":"2026-03-08T23:03:37.706226+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:03.949608+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:03.949608+0000","last_clean_scrub_stamp":"2026-03-08T23:03:03.949608+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T09:04:41.881046+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,2,7],"acting":[5,2,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"1.0","version":"66'39","reported_seq":68,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:39.956568+0000","last_change":"2026-03-08T23:02:44.478154+0000","last_active":"2026-03-08T23:03:39.956568+0000","last_peered":"2026-03-08T23:03:39.956568+0000","last_clean":"2026-03-08T23:03:39.956568+0000","last_became_active":"2026-03-08T23:02:44.468283+0000","last_became_peered":"2026-03-08T23:02:44.468283+0000","last_unstale":"2026-03-08T23:03:39.956568+0000","last_undegraded":"2026-03-08T23:03:39.956568+0000","last_fullsized":"2026-03-08T23:03:39.956568+0000","mapping_epoch":52,"log_start":"0'0","ondisk_log_start":"0'0","created":20,"last_epoch_clean":53,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T22:59:49.649662+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T22:59:49.64
9662+0000","last_clean_scrub_stamp":"2026-03-08T22:59:49.649662+0000","objects_scrubbed":0,"log_size":39,"log_dups_size":0,"ondisk_log_size":39,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T07:14:01.275386+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":106,"num_read_kb":213,"num_write":69,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":2,"num_bytes_recovered":459280,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,0,6],"acting":[7,0,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"5.4","version":"0'0","reported_seq":25,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.704463+0000","last_change":"2026-03-08T23:03:09.276138+0000","last_active":"2026-03-08T23:03:37.704463+0000","last_peered":"2026-03-08T23:03:37.704463+0000","last_clean":"2026-03-08T23:03:37.704463+0000","last_became_active":"2026-03-08T23:03:09.276065+0000","last_became_peered":"2026-03-08T23:03:0
9.276065+0000","last_unstale":"2026-03-08T23:03:37.704463+0000","last_undegraded":"2026-03-08T23:03:37.704463+0000","last_fullsized":"2026-03-08T23:03:37.704463+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:08.120926+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:08.120926+0000","last_clean_scrub_stamp":"2026-03-08T23:03:08.120926+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T00:45:59.167461+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,2,5],"acting":[7,2,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]}
,{"pgid":"6.7","version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.706137+0000","last_change":"2026-03-08T23:03:11.326007+0000","last_active":"2026-03-08T23:03:37.706137+0000","last_peered":"2026-03-08T23:03:37.706137+0000","last_clean":"2026-03-08T23:03:37.706137+0000","last_became_active":"2026-03-08T23:03:11.325511+0000","last_became_peered":"2026-03-08T23:03:11.325511+0000","last_unstale":"2026-03-08T23:03:37.706137+0000","last_undegraded":"2026-03-08T23:03:37.706137+0000","last_fullsized":"2026-03-08T23:03:37.706137+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:10.242477+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:10.242477+0000","last_clean_scrub_stamp":"2026-03-08T23:03:10.242477+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T03:34:48.464423+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3,4],"acting":[5,3,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"3.d","version":"62'17","reported_seq":57,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.703606+0000","last_change":"2026-03-08T23:03:07.144995+0000","last_active":"2026-03-08T23:03:37.703606+0000","last_peered":"2026-03-08T23:03:37.703606+0000","last_clean":"2026-03-08T23:03:37.703606+0000","last_became_active":"2026-03-08T23:03:07.143922+0000","last_became_peered":"2026-03-08T23:03:07.143922+0000","last_unstale":"2026-03-08T23:03:37.703606+0000","last_undegraded":"2026-03-08T23:03:37.703606+0000","last_fullsized":"2026-03-08T23:03:37.703606+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:06.109853+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:06.10
9853+0000","last_clean_scrub_stamp":"2026-03-08T23:03:06.109853+0000","objects_scrubbed":0,"log_size":17,"log_dups_size":0,"ondisk_log_size":17,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T04:29:17.453857+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":29,"num_read_kb":19,"num_write":18,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,5,6],"acting":[7,5,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"2.c","version":"0'0","reported_seq":33,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.604333+0000","last_change":"2026-03-08T23:03:05.261381+0000","last_active":"2026-03-08T23:03:37.604333+0000","last_peered":"2026-03-08T23:03:37.604333+0000","last_clean":"2026-03-08T23:03:37.604333+0000","last_became_active":"2026-03-08T23:03:05.261221+0000","last_became_peered":"2026-03-08T23:03:05.261221+00
00","last_unstale":"2026-03-08T23:03:37.604333+0000","last_undegraded":"2026-03-08T23:03:37.604333+0000","last_fullsized":"2026-03-08T23:03:37.604333+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:03.949608+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:03.949608+0000","last_clean_scrub_stamp":"2026-03-08T23:03:03.949608+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T02:21:09.467745+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,5,0],"acting":[2,5,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"5
.b","version":"0'0","reported_seq":25,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.604355+0000","last_change":"2026-03-08T23:03:09.260311+0000","last_active":"2026-03-08T23:03:37.604355+0000","last_peered":"2026-03-08T23:03:37.604355+0000","last_clean":"2026-03-08T23:03:37.604355+0000","last_became_active":"2026-03-08T23:03:09.260169+0000","last_became_peered":"2026-03-08T23:03:09.260169+0000","last_unstale":"2026-03-08T23:03:37.604355+0000","last_undegraded":"2026-03-08T23:03:37.604355+0000","last_fullsized":"2026-03-08T23:03:37.604355+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:08.120926+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:08.120926+0000","last_clean_scrub_stamp":"2026-03-08T23:03:08.120926+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T11:00:09.782806+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,0,5],"acting":[2,0,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.8","version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.703666+0000","last_change":"2026-03-08T23:03:11.330620+0000","last_active":"2026-03-08T23:03:37.703666+0000","last_peered":"2026-03-08T23:03:37.703666+0000","last_clean":"2026-03-08T23:03:37.703666+0000","last_became_active":"2026-03-08T23:03:11.328178+0000","last_became_peered":"2026-03-08T23:03:11.328178+0000","last_unstale":"2026-03-08T23:03:37.703666+0000","last_undegraded":"2026-03-08T23:03:37.703666+0000","last_fullsized":"2026-03-08T23:03:37.703666+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:10.242477+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:10.2424
77+0000","last_clean_scrub_stamp":"2026-03-08T23:03:10.242477+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T00:43:49.415980+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,2,3],"acting":[7,2,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.c","version":"62'10","reported_seq":44,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.706629+0000","last_change":"2026-03-08T23:03:07.147284+0000","last_active":"2026-03-08T23:03:37.706629+0000","last_peered":"2026-03-08T23:03:37.706629+0000","last_clean":"2026-03-08T23:03:37.706629+0000","last_became_active":"2026-03-08T23:03:07.147125+0000","last_became_peered":"2026-03-08T23:03:07.147125+0000","las
t_unstale":"2026-03-08T23:03:37.706629+0000","last_undegraded":"2026-03-08T23:03:37.706629+0000","last_fullsized":"2026-03-08T23:03:37.706629+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:06.109853+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:06.109853+0000","last_clean_scrub_stamp":"2026-03-08T23:03:06.109853+0000","objects_scrubbed":0,"log_size":10,"log_dups_size":0,"ondisk_log_size":10,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T07:08:08.030833+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":15,"num_read_kb":10,"num_write":10,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3,6],"acting":[5,3,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"2.d
","version":"0'0","reported_seq":33,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.704184+0000","last_change":"2026-03-08T23:03:05.253779+0000","last_active":"2026-03-08T23:03:37.704184+0000","last_peered":"2026-03-08T23:03:37.704184+0000","last_clean":"2026-03-08T23:03:37.704184+0000","last_became_active":"2026-03-08T23:03:05.253599+0000","last_became_peered":"2026-03-08T23:03:05.253599+0000","last_unstale":"2026-03-08T23:03:37.704184+0000","last_undegraded":"2026-03-08T23:03:37.704184+0000","last_fullsized":"2026-03-08T23:03:37.704184+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:03.949608+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:03.949608+0000","last_clean_scrub_stamp":"2026-03-08T23:03:03.949608+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T09:24:25.849096+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,4,3],"acting":[1,4,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"5.a","version":"0'0","reported_seq":25,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.604804+0000","last_change":"2026-03-08T23:03:09.263804+0000","last_active":"2026-03-08T23:03:37.604804+0000","last_peered":"2026-03-08T23:03:37.604804+0000","last_clean":"2026-03-08T23:03:37.604804+0000","last_became_active":"2026-03-08T23:03:09.263712+0000","last_became_peered":"2026-03-08T23:03:09.263712+0000","last_unstale":"2026-03-08T23:03:37.604804+0000","last_undegraded":"2026-03-08T23:03:37.604804+0000","last_fullsized":"2026-03-08T23:03:37.604804+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:08.120926+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:08.1209
26+0000","last_clean_scrub_stamp":"2026-03-08T23:03:08.120926+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T03:58:29.178345+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,4,3],"acting":[2,4,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.9","version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.602316+0000","last_change":"2026-03-08T23:03:11.306247+0000","last_active":"2026-03-08T23:03:37.602316+0000","last_peered":"2026-03-08T23:03:37.602316+0000","last_clean":"2026-03-08T23:03:37.602316+0000","last_became_active":"2026-03-08T23:03:11.306008+0000","last_became_peered":"2026-03-08T23:03:11.306008+0000","last_
unstale":"2026-03-08T23:03:37.602316+0000","last_undegraded":"2026-03-08T23:03:37.602316+0000","last_fullsized":"2026-03-08T23:03:37.602316+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:10.242477+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:10.242477+0000","last_clean_scrub_stamp":"2026-03-08T23:03:10.242477+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T02:37:02.509976+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,2],"acting":[0,7,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.f","versi
on":"63'15","reported_seq":54,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.704111+0000","last_change":"2026-03-08T23:03:07.143719+0000","last_active":"2026-03-08T23:03:37.704111+0000","last_peered":"2026-03-08T23:03:37.704111+0000","last_clean":"2026-03-08T23:03:37.704111+0000","last_became_active":"2026-03-08T23:03:07.142277+0000","last_became_peered":"2026-03-08T23:03:07.142277+0000","last_unstale":"2026-03-08T23:03:37.704111+0000","last_undegraded":"2026-03-08T23:03:37.704111+0000","last_fullsized":"2026-03-08T23:03:37.704111+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:06.109853+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:06.109853+0000","last_clean_scrub_stamp":"2026-03-08T23:03:06.109853+0000","objects_scrubbed":0,"log_size":15,"log_dups_size":0,"ondisk_log_size":15,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T04:11:36.105278+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":26,"num_read_kb":17,"num_write":16,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,0],"acting":[7,4,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"2.e","version":"0'0","reported_seq":33,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.604346+0000","last_change":"2026-03-08T23:03:05.257448+0000","last_active":"2026-03-08T23:03:37.604346+0000","last_peered":"2026-03-08T23:03:37.604346+0000","last_clean":"2026-03-08T23:03:37.604346+0000","last_became_active":"2026-03-08T23:03:05.257269+0000","last_became_peered":"2026-03-08T23:03:05.257269+0000","last_unstale":"2026-03-08T23:03:37.604346+0000","last_undegraded":"2026-03-08T23:03:37.604346+0000","last_fullsized":"2026-03-08T23:03:37.604346+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:03.949608+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:0
3.949608+0000","last_clean_scrub_stamp":"2026-03-08T23:03:03.949608+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T04:40:22.064448+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,3,7],"acting":[2,3,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"5.9","version":"63'11","reported_seq":52,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:04:20.396441+0000","last_change":"2026-03-08T23:03:09.276273+0000","last_active":"2026-03-08T23:04:20.396441+0000","last_peered":"2026-03-08T23:04:20.396441+0000","last_clean":"2026-03-08T23:04:20.396441+0000","last_became_active":"2026-03-08T23:03:09.276207+0000","last_became_peered":"2026-03-08T23:03:09.276207+0000
","last_unstale":"2026-03-08T23:04:20.396441+0000","last_undegraded":"2026-03-08T23:04:20.396441+0000","last_fullsized":"2026-03-08T23:04:20.396441+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:08.120926+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:08.120926+0000","last_clean_scrub_stamp":"2026-03-08T23:03:08.120926+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T00:37:08.341613+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,6,4],"acting":[7,6,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"6
.a","version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.705072+0000","last_change":"2026-03-08T23:03:11.569969+0000","last_active":"2026-03-08T23:03:37.705072+0000","last_peered":"2026-03-08T23:03:37.705072+0000","last_clean":"2026-03-08T23:03:37.705072+0000","last_became_active":"2026-03-08T23:03:11.569732+0000","last_became_peered":"2026-03-08T23:03:11.569732+0000","last_unstale":"2026-03-08T23:03:37.705072+0000","last_undegraded":"2026-03-08T23:03:37.705072+0000","last_fullsized":"2026-03-08T23:03:37.705072+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:10.242477+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:10.242477+0000","last_clean_scrub_stamp":"2026-03-08T23:03:10.242477+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T05:44:10.961590+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,6,0],"acting":[5,6,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"3.e","version":"62'11","reported_seq":48,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.704195+0000","last_change":"2026-03-08T23:03:07.145142+0000","last_active":"2026-03-08T23:03:37.704195+0000","last_peered":"2026-03-08T23:03:37.704195+0000","last_clean":"2026-03-08T23:03:37.704195+0000","last_became_active":"2026-03-08T23:03:07.141502+0000","last_became_peered":"2026-03-08T23:03:07.141502+0000","last_unstale":"2026-03-08T23:03:37.704195+0000","last_undegraded":"2026-03-08T23:03:37.704195+0000","last_fullsized":"2026-03-08T23:03:37.704195+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:06.109853+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:06.10
9853+0000","last_clean_scrub_stamp":"2026-03-08T23:03:06.109853+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T09:38:18.481763+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":20,"num_read_kb":13,"num_write":12,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,1],"acting":[7,4,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"2.f","version":"55'2","reported_seq":49,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.599217+0000","last_change":"2026-03-08T23:03:05.256739+0000","last_active":"2026-03-08T23:03:37.599217+0000","last_peered":"2026-03-08T23:03:37.599217+0000","last_clean":"2026-03-08T23:03:37.599217+0000","last_became_active":"2026-03-08T23:03:05.256600+0000","last_became_peered":"2026-03-08T23:03:05.256600+0
000","last_unstale":"2026-03-08T23:03:37.599217+0000","last_undegraded":"2026-03-08T23:03:37.599217+0000","last_fullsized":"2026-03-08T23:03:37.599217+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:03.949608+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:03.949608+0000","last_clean_scrub_stamp":"2026-03-08T23:03:03.949608+0000","objects_scrubbed":0,"log_size":2,"log_dups_size":0,"ondisk_log_size":2,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T08:36:37.504041+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":92,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":14,"num_read_kb":14,"num_write":4,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,0,7],"acting":[4,0,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid
":"5.8","version":"0'0","reported_seq":25,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.604750+0000","last_change":"2026-03-08T23:03:09.255626+0000","last_active":"2026-03-08T23:03:37.604750+0000","last_peered":"2026-03-08T23:03:37.604750+0000","last_clean":"2026-03-08T23:03:37.604750+0000","last_became_active":"2026-03-08T23:03:09.255474+0000","last_became_peered":"2026-03-08T23:03:09.255474+0000","last_unstale":"2026-03-08T23:03:37.604750+0000","last_undegraded":"2026-03-08T23:03:37.604750+0000","last_fullsized":"2026-03-08T23:03:37.604750+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:08.120926+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:08.120926+0000","last_clean_scrub_stamp":"2026-03-08T23:03:08.120926+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T10:39:05.509820+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,0,1],"acting":[2,0,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"6.b","version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.705379+0000","last_change":"2026-03-08T23:03:11.328263+0000","last_active":"2026-03-08T23:03:37.705379+0000","last_peered":"2026-03-08T23:03:37.705379+0000","last_clean":"2026-03-08T23:03:37.705379+0000","last_became_active":"2026-03-08T23:03:11.324286+0000","last_became_peered":"2026-03-08T23:03:11.324286+0000","last_unstale":"2026-03-08T23:03:37.705379+0000","last_undegraded":"2026-03-08T23:03:37.705379+0000","last_fullsized":"2026-03-08T23:03:37.705379+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:10.242477+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:10.2424
77+0000","last_clean_scrub_stamp":"2026-03-08T23:03:10.242477+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T04:13:09.153031+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,1],"acting":[3,7,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.11","version":"62'11","reported_seq":48,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.703500+0000","last_change":"2026-03-08T23:03:07.144009+0000","last_active":"2026-03-08T23:03:37.703500+0000","last_peered":"2026-03-08T23:03:37.703500+0000","last_clean":"2026-03-08T23:03:37.703500+0000","last_became_active":"2026-03-08T23:03:07.143612+0000","last_became_peered":"2026-03-08T23:03:07.143612+0000","la
st_unstale":"2026-03-08T23:03:37.703500+0000","last_undegraded":"2026-03-08T23:03:37.703500+0000","last_fullsized":"2026-03-08T23:03:37.703500+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:06.109853+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:06.109853+0000","last_clean_scrub_stamp":"2026-03-08T23:03:06.109853+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T00:25:47.730982+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":20,"num_read_kb":13,"num_write":12,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,6],"acting":[7,4,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"
2.10","version":"0'0","reported_seq":33,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.604253+0000","last_change":"2026-03-08T23:03:05.258314+0000","last_active":"2026-03-08T23:03:37.604253+0000","last_peered":"2026-03-08T23:03:37.604253+0000","last_clean":"2026-03-08T23:03:37.604253+0000","last_became_active":"2026-03-08T23:03:05.257953+0000","last_became_peered":"2026-03-08T23:03:05.257953+0000","last_unstale":"2026-03-08T23:03:37.604253+0000","last_undegraded":"2026-03-08T23:03:37.604253+0000","last_fullsized":"2026-03-08T23:03:37.604253+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:03.949608+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:03.949608+0000","last_clean_scrub_stamp":"2026-03-08T23:03:03.949608+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T03:38:01.823337+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,1,0],"acting":[2,1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"5.17","version":"0'0","reported_seq":25,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.706439+0000","last_change":"2026-03-08T23:03:09.274088+0000","last_active":"2026-03-08T23:03:37.706439+0000","last_peered":"2026-03-08T23:03:37.706439+0000","last_clean":"2026-03-08T23:03:37.706439+0000","last_became_active":"2026-03-08T23:03:09.273987+0000","last_became_peered":"2026-03-08T23:03:09.273987+0000","last_unstale":"2026-03-08T23:03:37.706439+0000","last_undegraded":"2026-03-08T23:03:37.706439+0000","last_fullsized":"2026-03-08T23:03:37.706439+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:08.120926+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:08.120
926+0000","last_clean_scrub_stamp":"2026-03-08T23:03:08.120926+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T08:22:11.788506+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,1,7],"acting":[3,1,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"6.14","version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.604276+0000","last_change":"2026-03-08T23:03:11.309936+0000","last_active":"2026-03-08T23:03:37.604276+0000","last_peered":"2026-03-08T23:03:37.604276+0000","last_clean":"2026-03-08T23:03:37.604276+0000","last_became_active":"2026-03-08T23:03:11.308216+0000","last_became_peered":"2026-03-08T23:03:11.308216+0000","las
t_unstale":"2026-03-08T23:03:37.604276+0000","last_undegraded":"2026-03-08T23:03:37.604276+0000","last_fullsized":"2026-03-08T23:03:37.604276+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:10.242477+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:10.242477+0000","last_clean_scrub_stamp":"2026-03-08T23:03:10.242477+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T02:38:50.167911+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,4,7],"acting":[2,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[]},{"pgid":"3.10","ve
rsion":"62'4","reported_seq":35,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.703585+0000","last_change":"2026-03-08T23:03:07.151427+0000","last_active":"2026-03-08T23:03:37.703585+0000","last_peered":"2026-03-08T23:03:37.703585+0000","last_clean":"2026-03-08T23:03:37.703585+0000","last_became_active":"2026-03-08T23:03:07.151150+0000","last_became_peered":"2026-03-08T23:03:07.151150+0000","last_unstale":"2026-03-08T23:03:37.703585+0000","last_undegraded":"2026-03-08T23:03:37.703585+0000","last_fullsized":"2026-03-08T23:03:37.703585+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:06.109853+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:06.109853+0000","last_clean_scrub_stamp":"2026-03-08T23:03:06.109853+0000","objects_scrubbed":0,"log_size":4,"log_dups_size":0,"ondisk_log_size":4,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-09T23:24:31.401635+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":6,"num_read_kb":4,"num_write":4,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,0,5],"acting":[6,0,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"2.11","version":"0'0","reported_seq":33,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.703550+0000","last_change":"2026-03-08T23:03:05.258410+0000","last_active":"2026-03-08T23:03:37.703550+0000","last_peered":"2026-03-08T23:03:37.703550+0000","last_clean":"2026-03-08T23:03:37.703550+0000","last_became_active":"2026-03-08T23:03:05.258203+0000","last_became_peered":"2026-03-08T23:03:05.258203+0000","last_unstale":"2026-03-08T23:03:37.703550+0000","last_undegraded":"2026-03-08T23:03:37.703550+0000","last_fullsized":"2026-03-08T23:03:37.703550+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:03.949608+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:03.949
608+0000","last_clean_scrub_stamp":"2026-03-08T23:03:03.949608+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T04:34:43.426775+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,4,1],"acting":[6,4,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"5.16","version":"0'0","reported_seq":25,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.706916+0000","last_change":"2026-03-08T23:03:09.266168+0000","last_active":"2026-03-08T23:03:37.706916+0000","last_peered":"2026-03-08T23:03:37.706916+0000","last_clean":"2026-03-08T23:03:37.706916+0000","last_became_active":"2026-03-08T23:03:09.262680+0000","last_became_peered":"2026-03-08T23:03:09.262680+0000","las
t_unstale":"2026-03-08T23:03:37.706916+0000","last_undegraded":"2026-03-08T23:03:37.706916+0000","last_fullsized":"2026-03-08T23:03:37.706916+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:08.120926+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:08.120926+0000","last_clean_scrub_stamp":"2026-03-08T23:03:08.120926+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T06:12:56.958547+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3,1],"acting":[5,3,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.15","ve
rsion":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.704825+0000","last_change":"2026-03-08T23:03:11.573043+0000","last_active":"2026-03-08T23:03:37.704825+0000","last_peered":"2026-03-08T23:03:37.704825+0000","last_clean":"2026-03-08T23:03:37.704825+0000","last_became_active":"2026-03-08T23:03:11.572931+0000","last_became_peered":"2026-03-08T23:03:11.572931+0000","last_unstale":"2026-03-08T23:03:37.704825+0000","last_undegraded":"2026-03-08T23:03:37.704825+0000","last_fullsized":"2026-03-08T23:03:37.704825+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:10.242477+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:10.242477+0000","last_clean_scrub_stamp":"2026-03-08T23:03:10.242477+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T03:06:15.485642+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,6,4],"acting":[7,6,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.13","version":"62'11","reported_seq":48,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.703814+0000","last_change":"2026-03-08T23:03:07.143810+0000","last_active":"2026-03-08T23:03:37.703814+0000","last_peered":"2026-03-08T23:03:37.703814+0000","last_clean":"2026-03-08T23:03:37.703814+0000","last_became_active":"2026-03-08T23:03:07.142444+0000","last_became_peered":"2026-03-08T23:03:07.142444+0000","last_unstale":"2026-03-08T23:03:37.703814+0000","last_undegraded":"2026-03-08T23:03:37.703814+0000","last_fullsized":"2026-03-08T23:03:37.703814+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:06.109853+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:06.1
09853+0000","last_clean_scrub_stamp":"2026-03-08T23:03:06.109853+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T04:01:50.409742+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":20,"num_read_kb":13,"num_write":12,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,2],"acting":[7,4,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"2.12","version":"0'0","reported_seq":33,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.706424+0000","last_change":"2026-03-08T23:03:05.267361+0000","last_active":"2026-03-08T23:03:37.706424+0000","last_peered":"2026-03-08T23:03:37.706424+0000","last_clean":"2026-03-08T23:03:37.706424+0000","last_became_active":"2026-03-08T23:03:05.267277+0000","last_became_peered":"2026-03-08T23:03:05.267277+
0000","last_unstale":"2026-03-08T23:03:37.706424+0000","last_undegraded":"2026-03-08T23:03:37.706424+0000","last_fullsized":"2026-03-08T23:03:37.706424+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:03.949608+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:03.949608+0000","last_clean_scrub_stamp":"2026-03-08T23:03:03.949608+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T01:07:04.245505+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3,7],"acting":[5,3,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":
"5.15","version":"63'11","reported_seq":52,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:04:20.396140+0000","last_change":"2026-03-08T23:03:09.259441+0000","last_active":"2026-03-08T23:04:20.396140+0000","last_peered":"2026-03-08T23:04:20.396140+0000","last_clean":"2026-03-08T23:04:20.396140+0000","last_became_active":"2026-03-08T23:03:09.259276+0000","last_became_peered":"2026-03-08T23:03:09.259276+0000","last_unstale":"2026-03-08T23:04:20.396140+0000","last_undegraded":"2026-03-08T23:04:20.396140+0000","last_fullsized":"2026-03-08T23:04:20.396140+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:08.120926+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:08.120926+0000","last_clean_scrub_stamp":"2026-03-08T23:03:08.120926+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T08:07:44.244730+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1,0],"acting":[5,1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"6.16","version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.602444+0000","last_change":"2026-03-08T23:03:11.320671+0000","last_active":"2026-03-08T23:03:37.602444+0000","last_peered":"2026-03-08T23:03:37.602444+0000","last_clean":"2026-03-08T23:03:37.602444+0000","last_became_active":"2026-03-08T23:03:11.320537+0000","last_became_peered":"2026-03-08T23:03:11.320537+0000","last_unstale":"2026-03-08T23:03:37.602444+0000","last_undegraded":"2026-03-08T23:03:37.602444+0000","last_fullsized":"2026-03-08T23:03:37.602444+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:10.242477+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:10.242
477+0000","last_clean_scrub_stamp":"2026-03-08T23:03:10.242477+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T02:11:41.849283+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,3],"acting":[0,7,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.12","version":"62'9","reported_seq":45,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.603023+0000","last_change":"2026-03-08T23:03:07.139366+0000","last_active":"2026-03-08T23:03:37.603023+0000","last_peered":"2026-03-08T23:03:37.603023+0000","last_clean":"2026-03-08T23:03:37.603023+0000","last_became_active":"2026-03-08T23:03:07.138442+0000","last_became_peered":"2026-03-08T23:03:07.138442+0000","la
st_unstale":"2026-03-08T23:03:37.603023+0000","last_undegraded":"2026-03-08T23:03:37.603023+0000","last_fullsized":"2026-03-08T23:03:37.603023+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:06.109853+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:06.109853+0000","last_clean_scrub_stamp":"2026-03-08T23:03:06.109853+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T06:25:17.244273+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,3],"acting":[0,7,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"2.
13","version":"55'1","reported_seq":41,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.602548+0000","last_change":"2026-03-08T23:03:05.265019+0000","last_active":"2026-03-08T23:03:37.602548+0000","last_peered":"2026-03-08T23:03:37.602548+0000","last_clean":"2026-03-08T23:03:37.602548+0000","last_became_active":"2026-03-08T23:03:05.256390+0000","last_became_peered":"2026-03-08T23:03:05.256390+0000","last_unstale":"2026-03-08T23:03:37.602548+0000","last_undegraded":"2026-03-08T23:03:37.602548+0000","last_fullsized":"2026-03-08T23:03:37.602548+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:03.949608+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:03.949608+0000","last_clean_scrub_stamp":"2026-03-08T23:03:03.949608+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T03:55:55.988563+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":436,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":10,"num_read_kb":10,"num_write":2,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,4,3],"acting":[0,4,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"5.14","version":"63'11","reported_seq":52,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:04:20.397314+0000","last_change":"2026-03-08T23:03:09.270946+0000","last_active":"2026-03-08T23:04:20.397314+0000","last_peered":"2026-03-08T23:04:20.397314+0000","last_clean":"2026-03-08T23:04:20.397314+0000","last_became_active":"2026-03-08T23:03:09.270872+0000","last_became_peered":"2026-03-08T23:03:09.270872+0000","last_unstale":"2026-03-08T23:04:20.397314+0000","last_undegraded":"2026-03-08T23:04:20.397314+0000","last_fullsized":"2026-03-08T23:04:20.397314+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:08.120926+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:
08.120926+0000","last_clean_scrub_stamp":"2026-03-08T23:03:08.120926+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T08:03:28.504294+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,2],"acting":[3,7,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"6.17","version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.598900+0000","last_change":"2026-03-08T23:03:11.309563+0000","last_active":"2026-03-08T23:03:37.598900+0000","last_peered":"2026-03-08T23:03:37.598900+0000","last_clean":"2026-03-08T23:03:37.598900+0000","last_became_active":"2026-03-08T23:03:11.309130+0000","last_became_peered":"2026-03-08T23:03:11.309130+00
00","last_unstale":"2026-03-08T23:03:37.598900+0000","last_undegraded":"2026-03-08T23:03:37.598900+0000","last_fullsized":"2026-03-08T23:03:37.598900+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:10.242477+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:10.242477+0000","last_clean_scrub_stamp":"2026-03-08T23:03:10.242477+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T02:16:53.939886+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,2,5],"acting":[4,2,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"3
.15","version":"62'9","reported_seq":45,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.703431+0000","last_change":"2026-03-08T23:03:07.147429+0000","last_active":"2026-03-08T23:03:37.703431+0000","last_peered":"2026-03-08T23:03:37.703431+0000","last_clean":"2026-03-08T23:03:37.703431+0000","last_became_active":"2026-03-08T23:03:07.145588+0000","last_became_peered":"2026-03-08T23:03:07.145588+0000","last_unstale":"2026-03-08T23:03:37.703431+0000","last_undegraded":"2026-03-08T23:03:37.703431+0000","last_fullsized":"2026-03-08T23:03:37.703431+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:06.109853+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:06.109853+0000","last_clean_scrub_stamp":"2026-03-08T23:03:06.109853+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T05:45:10.275131+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,3,4],"acting":[7,3,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"2.14","version":"55'1","reported_seq":41,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.703485+0000","last_change":"2026-03-08T23:03:05.262476+0000","last_active":"2026-03-08T23:03:37.703485+0000","last_peered":"2026-03-08T23:03:37.703485+0000","last_clean":"2026-03-08T23:03:37.703485+0000","last_became_active":"2026-03-08T23:03:05.261974+0000","last_became_peered":"2026-03-08T23:03:05.261974+0000","last_unstale":"2026-03-08T23:03:37.703485+0000","last_undegraded":"2026-03-08T23:03:37.703485+0000","last_fullsized":"2026-03-08T23:03:37.703485+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:03.949608+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03
:03.949608+0000","last_clean_scrub_stamp":"2026-03-08T23:03:03.949608+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T04:54:13.829359+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":993,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":10,"num_read_kb":10,"num_write":2,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,3,5],"acting":[6,3,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"5.13","version":"0'0","reported_seq":25,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.705784+0000","last_change":"2026-03-08T23:03:09.266934+0000","last_active":"2026-03-08T23:03:37.705784+0000","last_peered":"2026-03-08T23:03:37.705784+0000","last_clean":"2026-03-08T23:03:37.705784+0000","last_became_active":"2026-03-08T23:03:09.266834+0000","last_became_peered":"2026-03-08T23:03:09.266834
+0000","last_unstale":"2026-03-08T23:03:37.705784+0000","last_undegraded":"2026-03-08T23:03:37.705784+0000","last_fullsized":"2026-03-08T23:03:37.705784+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:08.120926+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:08.120926+0000","last_clean_scrub_stamp":"2026-03-08T23:03:08.120926+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T06:24:53.488537+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,1],"acting":[3,0,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid"
:"6.10","version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.602355+0000","last_change":"2026-03-08T23:03:11.308951+0000","last_active":"2026-03-08T23:03:37.602355+0000","last_peered":"2026-03-08T23:03:37.602355+0000","last_clean":"2026-03-08T23:03:37.602355+0000","last_became_active":"2026-03-08T23:03:11.308812+0000","last_became_peered":"2026-03-08T23:03:11.308812+0000","last_unstale":"2026-03-08T23:03:37.602355+0000","last_undegraded":"2026-03-08T23:03:37.602355+0000","last_fullsized":"2026-03-08T23:03:37.602355+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:10.242477+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:10.242477+0000","last_clean_scrub_stamp":"2026-03-08T23:03:10.242477+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T08:19:48.617927+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,5,1],"acting":[0,5,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"3.14","version":"62'10","reported_seq":44,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.598450+0000","last_change":"2026-03-08T23:03:07.148637+0000","last_active":"2026-03-08T23:03:37.598450+0000","last_peered":"2026-03-08T23:03:37.598450+0000","last_clean":"2026-03-08T23:03:37.598450+0000","last_became_active":"2026-03-08T23:03:07.146995+0000","last_became_peered":"2026-03-08T23:03:07.146995+0000","last_unstale":"2026-03-08T23:03:37.598450+0000","last_undegraded":"2026-03-08T23:03:37.598450+0000","last_fullsized":"2026-03-08T23:03:37.598450+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:06.109853+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:06.1
09853+0000","last_clean_scrub_stamp":"2026-03-08T23:03:06.109853+0000","objects_scrubbed":0,"log_size":10,"log_dups_size":0,"ondisk_log_size":10,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T04:29:19.989973+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":5,"num_object_clones":0,"num_object_copies":15,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":5,"num_whiteouts":0,"num_read":15,"num_read_kb":10,"num_write":10,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,7,6],"acting":[4,7,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"2.15","version":"0'0","reported_seq":33,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.704029+0000","last_change":"2026-03-08T23:03:05.272759+0000","last_active":"2026-03-08T23:03:37.704029+0000","last_peered":"2026-03-08T23:03:37.704029+0000","last_clean":"2026-03-08T23:03:37.704029+0000","last_became_active":"2026-03-08T23:03:05.272579+0000","last_became_peered":"2026-03-08T23:03:05.272579+00
00","last_unstale":"2026-03-08T23:03:37.704029+0000","last_undegraded":"2026-03-08T23:03:37.704029+0000","last_fullsized":"2026-03-08T23:03:37.704029+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:03.949608+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:03.949608+0000","last_clean_scrub_stamp":"2026-03-08T23:03:03.949608+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T08:15:34.349919+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,0],"acting":[1,5,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"5
.12","version":"0'0","reported_seq":25,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.703999+0000","last_change":"2026-03-08T23:03:09.268588+0000","last_active":"2026-03-08T23:03:37.703999+0000","last_peered":"2026-03-08T23:03:37.703999+0000","last_clean":"2026-03-08T23:03:37.703999+0000","last_became_active":"2026-03-08T23:03:09.267944+0000","last_became_peered":"2026-03-08T23:03:09.267944+0000","last_unstale":"2026-03-08T23:03:37.703999+0000","last_undegraded":"2026-03-08T23:03:37.703999+0000","last_fullsized":"2026-03-08T23:03:37.703999+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:08.120926+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:08.120926+0000","last_clean_scrub_stamp":"2026-03-08T23:03:08.120926+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T07:10:07.451790+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,3],"acting":[1,5,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"6.11","version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.705450+0000","last_change":"2026-03-08T23:03:11.329191+0000","last_active":"2026-03-08T23:03:37.705450+0000","last_peered":"2026-03-08T23:03:37.705450+0000","last_clean":"2026-03-08T23:03:37.705450+0000","last_became_active":"2026-03-08T23:03:11.320409+0000","last_became_peered":"2026-03-08T23:03:11.320409+0000","last_unstale":"2026-03-08T23:03:37.705450+0000","last_undegraded":"2026-03-08T23:03:37.705450+0000","last_fullsized":"2026-03-08T23:03:37.705450+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:10.242477+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:10.242
477+0000","last_clean_scrub_stamp":"2026-03-08T23:03:10.242477+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T04:11:25.131608+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,5],"acting":[3,0,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.17","version":"62'6","reported_seq":38,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.605206+0000","last_change":"2026-03-08T23:03:07.143064+0000","last_active":"2026-03-08T23:03:37.605206+0000","last_peered":"2026-03-08T23:03:37.605206+0000","last_clean":"2026-03-08T23:03:37.605206+0000","last_became_active":"2026-03-08T23:03:07.142940+0000","last_became_peered":"2026-03-08T23:03:07.142940+0000","la
st_unstale":"2026-03-08T23:03:37.605206+0000","last_undegraded":"2026-03-08T23:03:37.605206+0000","last_fullsized":"2026-03-08T23:03:37.605206+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:06.109853+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:06.109853+0000","last_clean_scrub_stamp":"2026-03-08T23:03:06.109853+0000","objects_scrubbed":0,"log_size":6,"log_dups_size":0,"ondisk_log_size":6,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T11:00:44.970445+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":3,"num_object_clones":0,"num_object_copies":9,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":3,"num_whiteouts":0,"num_read":9,"num_read_kb":6,"num_write":6,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,5,3],"acting":[0,5,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"2.16","v
ersion":"0'0","reported_seq":33,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.706385+0000","last_change":"2026-03-08T23:03:05.260055+0000","last_active":"2026-03-08T23:03:37.706385+0000","last_peered":"2026-03-08T23:03:37.706385+0000","last_clean":"2026-03-08T23:03:37.706385+0000","last_became_active":"2026-03-08T23:03:05.259828+0000","last_became_peered":"2026-03-08T23:03:05.259828+0000","last_unstale":"2026-03-08T23:03:37.706385+0000","last_undegraded":"2026-03-08T23:03:37.706385+0000","last_fullsized":"2026-03-08T23:03:37.706385+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:03.949608+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:03.949608+0000","last_clean_scrub_stamp":"2026-03-08T23:03:03.949608+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T07:28:36.850293+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,6,2],"acting":[5,6,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"5.11","version":"0'0","reported_seq":25,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.703146+0000","last_change":"2026-03-08T23:03:09.269178+0000","last_active":"2026-03-08T23:03:37.703146+0000","last_peered":"2026-03-08T23:03:37.703146+0000","last_clean":"2026-03-08T23:03:37.703146+0000","last_became_active":"2026-03-08T23:03:09.269028+0000","last_became_peered":"2026-03-08T23:03:09.269028+0000","last_unstale":"2026-03-08T23:03:37.703146+0000","last_undegraded":"2026-03-08T23:03:37.703146+0000","last_fullsized":"2026-03-08T23:03:37.703146+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:08.120926+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:08.120
926+0000","last_clean_scrub_stamp":"2026-03-08T23:03:08.120926+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T03:47:42.502854+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,4,7],"acting":[6,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"6.12","version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.704767+0000","last_change":"2026-03-08T23:03:11.310893+0000","last_active":"2026-03-08T23:03:37.704767+0000","last_peered":"2026-03-08T23:03:37.704767+0000","last_clean":"2026-03-08T23:03:37.704767+0000","last_became_active":"2026-03-08T23:03:11.310635+0000","last_became_peered":"2026-03-08T23:03:11.310635+0000","las
t_unstale":"2026-03-08T23:03:37.704767+0000","last_undegraded":"2026-03-08T23:03:37.704767+0000","last_fullsized":"2026-03-08T23:03:37.704767+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:10.242477+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:10.242477+0000","last_clean_scrub_stamp":"2026-03-08T23:03:10.242477+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T07:45:44.342211+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,2,4],"acting":[7,2,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.16","ve
rsion":"62'9","reported_seq":45,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.706442+0000","last_change":"2026-03-08T23:03:07.150402+0000","last_active":"2026-03-08T23:03:37.706442+0000","last_peered":"2026-03-08T23:03:37.706442+0000","last_clean":"2026-03-08T23:03:37.706442+0000","last_became_active":"2026-03-08T23:03:07.150221+0000","last_became_peered":"2026-03-08T23:03:07.150221+0000","last_unstale":"2026-03-08T23:03:37.706442+0000","last_undegraded":"2026-03-08T23:03:37.706442+0000","last_fullsized":"2026-03-08T23:03:37.706442+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:06.109853+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:06.109853+0000","last_clean_scrub_stamp":"2026-03-08T23:03:06.109853+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T02:29:03.387632+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,7,1],"acting":[5,7,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"2.17","version":"0'0","reported_seq":33,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.703388+0000","last_change":"2026-03-08T23:03:05.262601+0000","last_active":"2026-03-08T23:03:37.703388+0000","last_peered":"2026-03-08T23:03:37.703388+0000","last_clean":"2026-03-08T23:03:37.703388+0000","last_became_active":"2026-03-08T23:03:05.262123+0000","last_became_peered":"2026-03-08T23:03:05.262123+0000","last_unstale":"2026-03-08T23:03:37.703388+0000","last_undegraded":"2026-03-08T23:03:37.703388+0000","last_fullsized":"2026-03-08T23:03:37.703388+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:03.949608+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:
03.949608+0000","last_clean_scrub_stamp":"2026-03-08T23:03:03.949608+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-09T23:24:01.937092+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,5,2],"acting":[6,5,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"5.10","version":"0'0","reported_seq":25,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.704679+0000","last_change":"2026-03-08T23:03:09.270049+0000","last_active":"2026-03-08T23:03:37.704679+0000","last_peered":"2026-03-08T23:03:37.704679+0000","last_clean":"2026-03-08T23:03:37.704679+0000","last_became_active":"2026-03-08T23:03:09.269942+0000","last_became_peered":"2026-03-08T23:03:09.269942+0000
","last_unstale":"2026-03-08T23:03:37.704679+0000","last_undegraded":"2026-03-08T23:03:37.704679+0000","last_fullsized":"2026-03-08T23:03:37.704679+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:08.120926+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:08.120926+0000","last_clean_scrub_stamp":"2026-03-08T23:03:08.120926+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T02:50:18.154003+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,4,6],"acting":[7,4,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"6.1
3","version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.705328+0000","last_change":"2026-03-08T23:03:11.571867+0000","last_active":"2026-03-08T23:03:37.705328+0000","last_peered":"2026-03-08T23:03:37.705328+0000","last_clean":"2026-03-08T23:03:37.705328+0000","last_became_active":"2026-03-08T23:03:11.571109+0000","last_became_peered":"2026-03-08T23:03:11.571109+0000","last_unstale":"2026-03-08T23:03:37.705328+0000","last_undegraded":"2026-03-08T23:03:37.705328+0000","last_fullsized":"2026-03-08T23:03:37.705328+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:10.242477+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:10.242477+0000","last_clean_scrub_stamp":"2026-03-08T23:03:10.242477+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T08:08:31.446222+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,6],"acting":[3,0,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"6.1c","version":"62'1","reported_seq":23,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.703935+0000","last_change":"2026-03-08T23:03:11.310830+0000","last_active":"2026-03-08T23:03:37.703935+0000","last_peered":"2026-03-08T23:03:37.703935+0000","last_clean":"2026-03-08T23:03:37.703935+0000","last_became_active":"2026-03-08T23:03:11.310516+0000","last_became_peered":"2026-03-08T23:03:11.310516+0000","last_unstale":"2026-03-08T23:03:37.703935+0000","last_undegraded":"2026-03-08T23:03:37.703935+0000","last_fullsized":"2026-03-08T23:03:37.703935+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:10.242477+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:10.24
2477+0000","last_clean_scrub_stamp":"2026-03-08T23:03:10.242477+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T07:43:55.830970+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":403,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":3,"num_read_kb":3,"num_write":2,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[7,5,2],"acting":[7,5,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":7,"acting_primary":7,"purged_snaps":[]},{"pgid":"3.19","version":"62'15","reported_seq":54,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.704357+0000","last_change":"2026-03-08T23:03:07.146617+0000","last_active":"2026-03-08T23:03:37.704357+0000","last_peered":"2026-03-08T23:03:37.704357+0000","last_clean":"2026-03-08T23:03:37.704357+0000","last_became_active":"2026-03-08T23:03:07.146465+0000","last_became_peered":"2026-03-08T23:03:07.146465+0000"
,"last_unstale":"2026-03-08T23:03:37.704357+0000","last_undegraded":"2026-03-08T23:03:37.704357+0000","last_fullsized":"2026-03-08T23:03:37.704357+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:06.109853+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:06.109853+0000","last_clean_scrub_stamp":"2026-03-08T23:03:06.109853+0000","objects_scrubbed":0,"log_size":15,"log_dups_size":0,"ondisk_log_size":15,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T07:36:47.927560+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":7,"num_object_clones":0,"num_object_copies":21,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":7,"num_whiteouts":0,"num_read":26,"num_read_kb":17,"num_write":16,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,3,4],"acting":[1,3,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgi
d":"2.18","version":"0'0","reported_seq":33,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.706574+0000","last_change":"2026-03-08T23:03:05.266532+0000","last_active":"2026-03-08T23:03:37.706574+0000","last_peered":"2026-03-08T23:03:37.706574+0000","last_clean":"2026-03-08T23:03:37.706574+0000","last_became_active":"2026-03-08T23:03:05.266451+0000","last_became_peered":"2026-03-08T23:03:05.266451+0000","last_unstale":"2026-03-08T23:03:37.706574+0000","last_undegraded":"2026-03-08T23:03:37.706574+0000","last_fullsized":"2026-03-08T23:03:37.706574+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:03.949608+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:03.949608+0000","last_clean_scrub_stamp":"2026-03-08T23:03:03.949608+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T04:11:04.688704+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,4,7],"acting":[5,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"5.1f","version":"63'11","reported_seq":55,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:04:20.396149+0000","last_change":"2026-03-08T23:03:09.268911+0000","last_active":"2026-03-08T23:04:20.396149+0000","last_peered":"2026-03-08T23:04:20.396149+0000","last_clean":"2026-03-08T23:04:20.396149+0000","last_became_active":"2026-03-08T23:03:09.268820+0000","last_became_peered":"2026-03-08T23:03:09.268820+0000","last_unstale":"2026-03-08T23:04:20.396149+0000","last_undegraded":"2026-03-08T23:04:20.396149+0000","last_fullsized":"2026-03-08T23:04:20.396149+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:08.120926+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:08.1
20926+0000","last_clean_scrub_stamp":"2026-03-08T23:03:08.120926+0000","objects_scrubbed":0,"log_size":11,"log_dups_size":0,"ondisk_log_size":11,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T08:13:46.772858+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,4,7],"acting":[6,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"6.1d","version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.704563+0000","last_change":"2026-03-08T23:03:11.309735+0000","last_active":"2026-03-08T23:03:37.704563+0000","last_peered":"2026-03-08T23:03:37.704563+0000","last_clean":"2026-03-08T23:03:37.704563+0000","last_became_active":"2026-03-08T23:03:11.309608+0000","last_became_peered":"2026-03-08T23:03:11.309608+0000",
"last_unstale":"2026-03-08T23:03:37.704563+0000","last_undegraded":"2026-03-08T23:03:37.704563+0000","last_fullsized":"2026-03-08T23:03:37.704563+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:10.242477+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:10.242477+0000","last_clean_scrub_stamp":"2026-03-08T23:03:10.242477+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T08:13:35.136750+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,5,4],"acting":[1,5,4],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"3.18"
,"version":"62'9","reported_seq":45,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.706064+0000","last_change":"2026-03-08T23:03:07.142109+0000","last_active":"2026-03-08T23:03:37.706064+0000","last_peered":"2026-03-08T23:03:37.706064+0000","last_clean":"2026-03-08T23:03:37.706064+0000","last_became_active":"2026-03-08T23:03:07.141998+0000","last_became_peered":"2026-03-08T23:03:07.141998+0000","last_unstale":"2026-03-08T23:03:37.706064+0000","last_undegraded":"2026-03-08T23:03:37.706064+0000","last_fullsized":"2026-03-08T23:03:37.706064+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:06.109853+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:06.109853+0000","last_clean_scrub_stamp":"2026-03-08T23:03:06.109853+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T05:33:03.676215+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0,1],"acting":[3,0,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"2.19","version":"55'1","reported_seq":34,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.705997+0000","last_change":"2026-03-08T23:03:05.302921+0000","last_active":"2026-03-08T23:03:37.705997+0000","last_peered":"2026-03-08T23:03:37.705997+0000","last_clean":"2026-03-08T23:03:37.705997+0000","last_became_active":"2026-03-08T23:03:05.302815+0000","last_became_peered":"2026-03-08T23:03:05.302815+0000","last_unstale":"2026-03-08T23:03:37.705997+0000","last_undegraded":"2026-03-08T23:03:37.705997+0000","last_fullsized":"2026-03-08T23:03:37.705997+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:03.949608+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03
:03.949608+0000","last_clean_scrub_stamp":"2026-03-08T23:03:03.949608+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T04:34:48.105325+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":46,"num_objects":1,"num_object_clones":0,"num_object_copies":3,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":1,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,6,0],"acting":[3,6,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"5.1e","version":"0'0","reported_seq":25,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.603431+0000","last_change":"2026-03-08T23:03:09.275586+0000","last_active":"2026-03-08T23:03:37.603431+0000","last_peered":"2026-03-08T23:03:37.603431+0000","last_clean":"2026-03-08T23:03:37.603431+0000","last_became_active":"2026-03-08T23:03:09.274433+0000","last_became_peered":"2026-03-08T23:03:09.274433+00
00","last_unstale":"2026-03-08T23:03:37.603431+0000","last_undegraded":"2026-03-08T23:03:37.603431+0000","last_fullsized":"2026-03-08T23:03:37.603431+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:08.120926+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:08.120926+0000","last_clean_scrub_stamp":"2026-03-08T23:03:08.120926+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T00:29:52.888855+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,7,2],"acting":[0,7,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"6
.1e","version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.598555+0000","last_change":"2026-03-08T23:03:11.568225+0000","last_active":"2026-03-08T23:03:37.598555+0000","last_peered":"2026-03-08T23:03:37.598555+0000","last_clean":"2026-03-08T23:03:37.598555+0000","last_became_active":"2026-03-08T23:03:11.567909+0000","last_became_peered":"2026-03-08T23:03:11.567909+0000","last_unstale":"2026-03-08T23:03:37.598555+0000","last_undegraded":"2026-03-08T23:03:37.598555+0000","last_fullsized":"2026-03-08T23:03:37.598555+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:10.242477+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:10.242477+0000","last_clean_scrub_stamp":"2026-03-08T23:03:10.242477+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T04:48:59.439216+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,6,5],"acting":[4,6,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"2.1a","version":"0'0","reported_seq":33,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.703318+0000","last_change":"2026-03-08T23:03:05.267034+0000","last_active":"2026-03-08T23:03:37.703318+0000","last_peered":"2026-03-08T23:03:37.703318+0000","last_clean":"2026-03-08T23:03:37.703318+0000","last_became_active":"2026-03-08T23:03:05.266677+0000","last_became_peered":"2026-03-08T23:03:05.266677+0000","last_unstale":"2026-03-08T23:03:37.703318+0000","last_undegraded":"2026-03-08T23:03:37.703318+0000","last_fullsized":"2026-03-08T23:03:37.703318+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:03.949608+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:03.949
608+0000","last_clean_scrub_stamp":"2026-03-08T23:03:03.949608+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T07:26:45.992415+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[6,4,7],"acting":[6,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":6,"acting_primary":6,"purged_snaps":[]},{"pgid":"3.1b","version":"62'5","reported_seq":39,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.605260+0000","last_change":"2026-03-08T23:03:07.149451+0000","last_active":"2026-03-08T23:03:37.605260+0000","last_peered":"2026-03-08T23:03:37.605260+0000","last_clean":"2026-03-08T23:03:37.605260+0000","last_became_active":"2026-03-08T23:03:07.149292+0000","last_became_peered":"2026-03-08T23:03:07.149292+0000","la
st_unstale":"2026-03-08T23:03:37.605260+0000","last_undegraded":"2026-03-08T23:03:37.605260+0000","last_fullsized":"2026-03-08T23:03:37.605260+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:06.109853+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:06.109853+0000","last_clean_scrub_stamp":"2026-03-08T23:03:06.109853+0000","objects_scrubbed":0,"log_size":5,"log_dups_size":0,"ondisk_log_size":5,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-09T23:51:59.467893+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":11,"num_read_kb":7,"num_write":6,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,4,7],"acting":[0,4,7],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[]},{"pgid":"5.1d"
,"version":"0'0","reported_seq":25,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.703447+0000","last_change":"2026-03-08T23:03:09.259503+0000","last_active":"2026-03-08T23:03:37.703447+0000","last_peered":"2026-03-08T23:03:37.703447+0000","last_clean":"2026-03-08T23:03:37.703447+0000","last_became_active":"2026-03-08T23:03:09.259370+0000","last_became_peered":"2026-03-08T23:03:09.259370+0000","last_unstale":"2026-03-08T23:03:37.703447+0000","last_undegraded":"2026-03-08T23:03:37.703447+0000","last_fullsized":"2026-03-08T23:03:37.703447+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:08.120926+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:08.120926+0000","last_clean_scrub_stamp":"2026-03-08T23:03:08.120926+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T01:11:01.394256+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,4,0],"acting":[1,4,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"6.1f","version":"0'0","reported_seq":21,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.706072+0000","last_change":"2026-03-08T23:03:11.572471+0000","last_active":"2026-03-08T23:03:37.706072+0000","last_peered":"2026-03-08T23:03:37.706072+0000","last_clean":"2026-03-08T23:03:37.706072+0000","last_became_active":"2026-03-08T23:03:11.572080+0000","last_became_peered":"2026-03-08T23:03:11.572080+0000","last_unstale":"2026-03-08T23:03:37.706072+0000","last_undegraded":"2026-03-08T23:03:37.706072+0000","last_fullsized":"2026-03-08T23:03:37.706072+0000","mapping_epoch":60,"log_start":"0'0","ondisk_log_start":"0'0","created":60,"last_epoch_clean":61,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:10.242477+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:10.242
477+0000","last_clean_scrub_stamp":"2026-03-08T23:03:10.242477+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T00:49:11.195104+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,6,5],"acting":[3,6,5],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"2.1b","version":"0'0","reported_seq":33,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.706124+0000","last_change":"2026-03-08T23:03:05.259076+0000","last_active":"2026-03-08T23:03:37.706124+0000","last_peered":"2026-03-08T23:03:37.706124+0000","last_clean":"2026-03-08T23:03:37.706124+0000","last_became_active":"2026-03-08T23:03:05.258954+0000","last_became_peered":"2026-03-08T23:03:05.258954+0000","las
t_unstale":"2026-03-08T23:03:37.706124+0000","last_undegraded":"2026-03-08T23:03:37.706124+0000","last_fullsized":"2026-03-08T23:03:37.706124+0000","mapping_epoch":54,"log_start":"0'0","ondisk_log_start":"0'0","created":54,"last_epoch_clean":55,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:03.949608+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:03.949608+0000","last_clean_scrub_stamp":"2026-03-08T23:03:03.949608+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T07:39:00.801184+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,7,6],"acting":[3,7,6],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"3.1a","ve
rsion":"62'9","reported_seq":45,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.599249+0000","last_change":"2026-03-08T23:03:07.147186+0000","last_active":"2026-03-08T23:03:37.599249+0000","last_peered":"2026-03-08T23:03:37.599249+0000","last_clean":"2026-03-08T23:03:37.599249+0000","last_became_active":"2026-03-08T23:03:07.146718+0000","last_became_peered":"2026-03-08T23:03:07.146718+0000","last_unstale":"2026-03-08T23:03:37.599249+0000","last_undegraded":"2026-03-08T23:03:37.599249+0000","last_fullsized":"2026-03-08T23:03:37.599249+0000","mapping_epoch":56,"log_start":"0'0","ondisk_log_start":"0'0","created":56,"last_epoch_clean":57,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:06.109853+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:06.109853+0000","last_clean_scrub_stamp":"2026-03-08T23:03:06.109853+0000","objects_scrubbed":0,"log_size":9,"log_dups_size":0,"ondisk_log_size":9,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-10T03:46:42.426757+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":110,"num_objects":4,"num_object_clones":0,"num_object_copies":12,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":17,"num_read_kb":11,"num_write":10,"num_write_kb":1,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,1,2],"acting":[4,1,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"5.1c","version":"0'0","reported_seq":25,"reported_epoch":66,"state":"active+clean","last_fresh":"2026-03-08T23:03:37.599238+0000","last_change":"2026-03-08T23:03:09.262136+0000","last_active":"2026-03-08T23:03:37.599238+0000","last_peered":"2026-03-08T23:03:37.599238+0000","last_clean":"2026-03-08T23:03:37.599238+0000","last_became_active":"2026-03-08T23:03:09.262055+0000","last_became_peered":"2026-03-08T23:03:09.262055+0000","last_unstale":"2026-03-08T23:03:37.599238+0000","last_undegraded":"2026-03-08T23:03:37.599238+0000","last_fullsized":"2026-03-08T23:03:37.599238+0000","mapping_epoch":58,"log_start":"0'0","ondisk_log_start":"0'0","created":58,"last_epoch_clean":59,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-08T23:03:08.120926+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-08T23:03:
08.120926+0000","last_clean_scrub_stamp":"2026-03-08T23:03:08.120926+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-10T02:33:25.062434+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,3,2],"acting":[4,3,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]}],"pool_stats":[{"poolid":6,"num_pg":32,"stat_sum":{"num_bytes":416,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":3,"num_read_kb":3,"num_write":3,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"
num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":24576,"data_stored":1248,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":2,"ondisk_log_size":2,"up":96,"acting":96,"num_store_stats":8},{"poolid":5,"num_pg":32,"stat_sum":{"num_bytes":0,"num_objects":8,"num_object_clones":0,"num_object_copies":24,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":8,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":88,"ondisk_log_size":88,"up":96,"acting":96,"num_store_stats":8},{"poolid":4,"
num_pg":3,"stat_sum":{"num_bytes":408,"num_objects":3,"num_object_clones":0,"num_object_copies":9,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":3,"num_whiteouts":0,"num_read":73,"num_read_kb":68,"num_write":6,"num_write_kb":3,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":24576,"data_stored":1224,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":8,"ondisk_log_size":8,"up":9,"acting":9,"num_store_stats":7},{"poolid":3,"num_pg":32,"stat_sum":{"num_bytes":3702,"num_objects":178,"num_object_clones":0,"num_object_copies":534,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":178,"num_whiteouts":0,"num_read":701,"num_read_kb":458,"num_write":417,"num_write_kb":34,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapse
ts":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":417792,"data_stored":11106,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":395,"ondisk_log_size":395,"up":96,"acting":96,"num_store_stats":8},{"poolid":2,"num_pg":32,"stat_sum":{"num_bytes":1613,"num_objects":6,"num_object_clones":0,"num_object_copies":18,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":6,"num_whiteouts":0,"num_read":34,"num_read_kb":34,"num_write":10,"num_write_kb":6,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":73728,"data_stored":4839,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":6,"ondisk_log_size":6,"up":96,"acting":96,"num_store_stats":8},{"poolid":1,"num_pg":1,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":106,"num_read_kb":213,"num_write":69,"num_write_kb":584,"num_scrub
_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":2,"num_bytes_recovered":459280,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":1388544,"data_stored":1377840,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":39,"ondisk_log_size":39,"up":3,"acting":3,"num_store_stats":3}],"osd_stats":[{"osd":7,"up_from":51,"seq":219043332119,"num_pgs":60,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27952,"kb_used_data":1120,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939472,"statfs":{"total":21470642176,"available":21442019328,"internally_reserved":0,"allocated":1146880,"data_stored":712953,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1584,"internal_metadata":27458000},"hb_peers":[0,1,2,3,4,5,6],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":6,"up_from":44,"seq":188978561054,"num_pgs":42,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27928,"kb_used_data":1096,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939496,"statfs":{"total":21470642176,"available":21442043904,"internally_reserved":0,"allocated":1122304,"data_stored":712604,"data_compressed":0,"data_compressed_allocat
ed":0,"data_compressed_original":0,"omap_allocated":1589,"internal_metadata":27457995},"hb_peers":[0,1,2,3,4,5,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":5,"up_from":38,"seq":163208757285,"num_pgs":53,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27484,"kb_used_data":644,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939940,"statfs":{"total":21470642176,"available":21442498560,"internally_reserved":0,"allocated":659456,"data_stored":253713,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1,2,3,4,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":4,"up_from":32,"seq":137438953517,"num_pgs":56,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27516,"kb_used_data":680,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939908,"statfs":{"total":21470642176,"available":21442465792,"internally_reserved":0,"allocated":696320,"data_stored":253699,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1,2,3,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":3,"up_from":26,"seq":111669149747,"num_pgs":50,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27484,"kb_used_data":648,"kb_used_omap":
1,"kb_used_meta":26814,"kb_avail":20939940,"statfs":{"total":21470642176,"available":21442498560,"internally_reserved":0,"allocated":663552,"data_stored":254147,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1587,"internal_metadata":27457997},"hb_peers":[0,1,2,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":2,"up_from":18,"seq":77309411386,"num_pgs":38,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27480,"kb_used_data":640,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939944,"statfs":{"total":21470642176,"available":21442502656,"internally_reserved":0,"allocated":655360,"data_stored":252811,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":1,"up_from":13,"seq":55834574913,"num_pgs":47,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27492,"kb_used_data":652,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939932,"statfs":{"total":21470642176,"available":21442490368,"internally_reserved":0,"allocated":667648,"data_stored":252649,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,2,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_laten
cy_ns":0},"alerts":[]},{"osd":0,"up_from":8,"seq":34359738439,"num_pgs":50,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27944,"kb_used_data":1108,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939480,"statfs":{"total":21470642176,"available":21442027520,"internally_reserved":0,"allocated":1134592,"data_stored":712689,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[1,2,3,4,5,6,7],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]}],"pool_statfs":[{"poolid":1,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":16384,"data_stored":574,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":46,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_sto
red":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":12288,"data_stored":1475,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":16384,"data_stored":574,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":993,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":12288,"data_stored":1085,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":8192,"data_stored":92,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":49152,"data_stored":1320,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":57344,"data_stored":1458,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":49152,"data_stored":1282,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":3,"total":0,"available":0,"internally_reserved":0,"a
llocated":40960,"data_stored":1144,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":73728,"data_stored":1980,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":45056,"data_stored":1172,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":40960,"data_stored":1100,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":3,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":61440,"data_stored":1650,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":389,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":19,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":389,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":19,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":5,"total":0,"available":0,"i
nternally_reserved":0,"allocated":4096,"data_stored":19,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":4,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":389,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":6,"total":0,"available":0,"internally_reserved":
0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":5,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":403,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":13,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":403,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":6,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":13,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":6,"osd":7,"total":0,"available":0,"internally_reserved":0,"allocate
d":8192,"data_stored":416,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0}]}} 2026-03-08T23:04:33.568 INFO:tasks.cephadm.ceph_manager.ceph:clean! 2026-03-08T23:04:33.568 INFO:tasks.ceph:Waiting until ceph cluster ceph is healthy... 2026-03-08T23:04:33.568 INFO:tasks.cephadm.ceph_manager.ceph:wait_until_healthy 2026-03-08T23:04:33.568 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid e2eb96e6-1b41-11f1-83e5-75f1b5373d30 -- ceph health --format=json 2026-03-08T23:04:35.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:34 vm06 bash[20625]: audit 2026-03-08T23:04:33.212815+0000 mgr.y (mgr.24419) 68 : audit [DBG] from='client.24563 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-08T23:04:35.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:34 vm06 bash[20625]: audit 2026-03-08T23:04:33.212815+0000 mgr.y (mgr.24419) 68 : audit [DBG] from='client.24563 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-08T23:04:35.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:34 vm06 bash[20625]: cluster 2026-03-08T23:04:33.634484+0000 mgr.y (mgr.24419) 69 : cluster [DBG] pgmap v30: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:04:35.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:34 vm06 bash[20625]: cluster 2026-03-08T23:04:33.634484+0000 mgr.y (mgr.24419) 69 : cluster [DBG] pgmap v30: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:04:35.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:34 vm06 bash[27746]: audit 2026-03-08T23:04:33.212815+0000 mgr.y (mgr.24419) 68 : audit [DBG] from='client.24563 -' 
entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-08T23:04:35.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:34 vm06 bash[27746]: audit 2026-03-08T23:04:33.212815+0000 mgr.y (mgr.24419) 68 : audit [DBG] from='client.24563 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-08T23:04:35.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:34 vm06 bash[27746]: cluster 2026-03-08T23:04:33.634484+0000 mgr.y (mgr.24419) 69 : cluster [DBG] pgmap v30: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:04:35.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:34 vm06 bash[27746]: cluster 2026-03-08T23:04:33.634484+0000 mgr.y (mgr.24419) 69 : cluster [DBG] pgmap v30: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:04:35.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:34 vm11 bash[23232]: audit 2026-03-08T23:04:33.212815+0000 mgr.y (mgr.24419) 68 : audit [DBG] from='client.24563 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-08T23:04:35.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:34 vm11 bash[23232]: audit 2026-03-08T23:04:33.212815+0000 mgr.y (mgr.24419) 68 : audit [DBG] from='client.24563 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-08T23:04:35.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:34 vm11 bash[23232]: cluster 2026-03-08T23:04:33.634484+0000 mgr.y (mgr.24419) 69 : cluster [DBG] pgmap v30: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:04:35.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:34 vm11 bash[23232]: cluster 2026-03-08T23:04:33.634484+0000 mgr.y 
(mgr.24419) 69 : cluster [DBG] pgmap v30: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:04:37.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:36 vm06 bash[20625]: cluster 2026-03-08T23:04:35.634919+0000 mgr.y (mgr.24419) 70 : cluster [DBG] pgmap v31: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:04:37.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:36 vm06 bash[20625]: cluster 2026-03-08T23:04:35.634919+0000 mgr.y (mgr.24419) 70 : cluster [DBG] pgmap v31: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:04:37.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:36 vm06 bash[27746]: cluster 2026-03-08T23:04:35.634919+0000 mgr.y (mgr.24419) 70 : cluster [DBG] pgmap v31: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:04:37.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:36 vm06 bash[27746]: cluster 2026-03-08T23:04:35.634919+0000 mgr.y (mgr.24419) 70 : cluster [DBG] pgmap v31: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:04:37.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:36 vm11 bash[23232]: cluster 2026-03-08T23:04:35.634919+0000 mgr.y (mgr.24419) 70 : cluster [DBG] pgmap v31: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:04:37.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:36 vm11 bash[23232]: cluster 2026-03-08T23:04:35.634919+0000 mgr.y (mgr.24419) 70 : cluster [DBG] pgmap v31: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:04:38.244 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config 
/var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/mon.c/config 2026-03-08T23:04:38.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:37 vm06 bash[20625]: audit 2026-03-08T23:04:37.637758+0000 mon.c (mon.2) 62 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.12", "id": [7, 2]}]: dispatch 2026-03-08T23:04:38.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:37 vm06 bash[20625]: audit 2026-03-08T23:04:37.637758+0000 mon.c (mon.2) 62 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.12", "id": [7, 2]}]: dispatch 2026-03-08T23:04:38.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:37 vm06 bash[20625]: audit 2026-03-08T23:04:37.638026+0000 mon.a (mon.0) 798 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.12", "id": [7, 2]}]: dispatch 2026-03-08T23:04:38.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:37 vm06 bash[20625]: audit 2026-03-08T23:04:37.638026+0000 mon.a (mon.0) 798 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.12", "id": [7, 2]}]: dispatch 2026-03-08T23:04:38.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:37 vm06 bash[20625]: audit 2026-03-08T23:04:37.746640+0000 mon.c (mon.2) 63 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:04:38.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:37 vm06 bash[20625]: audit 2026-03-08T23:04:37.746640+0000 mon.c (mon.2) 63 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:04:38.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:37 vm06 bash[27746]: audit 
2026-03-08T23:04:37.637758+0000 mon.c (mon.2) 62 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.12", "id": [7, 2]}]: dispatch 2026-03-08T23:04:38.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:37 vm06 bash[27746]: audit 2026-03-08T23:04:37.637758+0000 mon.c (mon.2) 62 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.12", "id": [7, 2]}]: dispatch 2026-03-08T23:04:38.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:37 vm06 bash[27746]: audit 2026-03-08T23:04:37.638026+0000 mon.a (mon.0) 798 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.12", "id": [7, 2]}]: dispatch 2026-03-08T23:04:38.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:37 vm06 bash[27746]: audit 2026-03-08T23:04:37.638026+0000 mon.a (mon.0) 798 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.12", "id": [7, 2]}]: dispatch 2026-03-08T23:04:38.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:37 vm06 bash[27746]: audit 2026-03-08T23:04:37.746640+0000 mon.c (mon.2) 63 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:04:38.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:37 vm06 bash[27746]: audit 2026-03-08T23:04:37.746640+0000 mon.c (mon.2) 63 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:04:38.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:37 vm11 bash[23232]: audit 2026-03-08T23:04:37.637758+0000 mon.c (mon.2) 62 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": 
"json", "pgid": "2.12", "id": [7, 2]}]: dispatch 2026-03-08T23:04:38.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:37 vm11 bash[23232]: audit 2026-03-08T23:04:37.637758+0000 mon.c (mon.2) 62 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.12", "id": [7, 2]}]: dispatch 2026-03-08T23:04:38.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:37 vm11 bash[23232]: audit 2026-03-08T23:04:37.638026+0000 mon.a (mon.0) 798 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.12", "id": [7, 2]}]: dispatch 2026-03-08T23:04:38.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:37 vm11 bash[23232]: audit 2026-03-08T23:04:37.638026+0000 mon.a (mon.0) 798 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.12", "id": [7, 2]}]: dispatch 2026-03-08T23:04:38.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:37 vm11 bash[23232]: audit 2026-03-08T23:04:37.746640+0000 mon.c (mon.2) 63 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:04:38.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:37 vm11 bash[23232]: audit 2026-03-08T23:04:37.746640+0000 mon.c (mon.2) 63 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:04:38.612 INFO:teuthology.orchestra.run.vm06.stdout: 2026-03-08T23:04:38.612 INFO:teuthology.orchestra.run.vm06.stdout:{"status":"HEALTH_OK","checks":{},"mutes":[]} 2026-03-08T23:04:38.677 INFO:tasks.cephadm.ceph_manager.ceph:wait_until_healthy done 2026-03-08T23:04:38.677 INFO:tasks.cephadm:Setup complete, yielding 2026-03-08T23:04:38.677 INFO:teuthology.run_tasks:Running task cephadm.shell... 
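For reference, the cephadm.shell task that starts here runs a key-rotation loop (visible in the DEBUG lines that follow): for each daemon it snapshots the current key with `ceph auth get-key`, issues `ceph orch daemon rotate-key`, then polls until the key changes. The logic can be reproduced as a standalone sketch; `get_key` and `rotate_key` below are hypothetical stubs standing in for the real `ceph` commands so the wait-loop can be exercised without a cluster, and the poll interval is shortened from the workunit's 5 seconds.

```shell
#!/usr/bin/env bash
# Sketch of the rotate-keys workunit loop: record the current key,
# request a rotation, then poll until the key actually changes.
set -e

KEYS_DIR=$(mktemp -d)

# Hypothetical stand-ins for `ceph auth get-key $f` and
# `ceph orch daemon rotate-key $f` (which rotates asynchronously).
get_key() { cat "$KEYS_DIR/$1"; }
rotate_key() { ( sleep 0.2; echo "newkey-$1" > "$KEYS_DIR/$1" ) & }

rotate_and_wait() {
    local f=$1
    echo "rotating key for $f"
    local K NK
    K=$(get_key "$f")
    NK=$K
    rotate_key "$f"
    # The rotation lands asynchronously, so spin until the stored
    # key differs from the snapshot taken before the request.
    while [ "$K" == "$NK" ]; do
        sleep 1
        NK=$(get_key "$f")
    done
}

for f in osd.0 mgr.y; do
    echo "oldkey-$f" > "$KEYS_DIR/$f"   # seed an initial key
    rotate_and_wait "$f"
done
echo "all keys rotated"
```

The polling-on-`get-key` design matters because `rotate-key` only schedules the rotation (the log below shows "Scheduled to rotate-key osd.0 on host 'vm06'"); the key in the mon database changes some time later, so the test must wait for the observed value to differ rather than trust the command's return.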
2026-03-08T23:04:38.679 INFO:tasks.cephadm:Running commands on role mon.a host ubuntu@vm06.local 2026-03-08T23:04:38.680 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e2eb96e6-1b41-11f1-83e5-75f1b5373d30 -- bash -c 'set -ex 2026-03-08T23:04:38.680 DEBUG:teuthology.orchestra.run.vm06:> for f in osd.0 osd.1 osd.2 osd.3 osd.4 osd.5 osd.6 osd.7 mgr.y mgr.x 2026-03-08T23:04:38.680 DEBUG:teuthology.orchestra.run.vm06:> do 2026-03-08T23:04:38.680 DEBUG:teuthology.orchestra.run.vm06:> echo "rotating key for $f" 2026-03-08T23:04:38.680 DEBUG:teuthology.orchestra.run.vm06:> K=$(ceph auth get-key $f) 2026-03-08T23:04:38.680 DEBUG:teuthology.orchestra.run.vm06:> NK="$K" 2026-03-08T23:04:38.680 DEBUG:teuthology.orchestra.run.vm06:> ceph orch daemon rotate-key $f 2026-03-08T23:04:38.680 DEBUG:teuthology.orchestra.run.vm06:> while [ "$K" == "$NK" ]; do 2026-03-08T23:04:38.680 DEBUG:teuthology.orchestra.run.vm06:> sleep 5 2026-03-08T23:04:38.680 DEBUG:teuthology.orchestra.run.vm06:> NK=$(ceph auth get-key $f) 2026-03-08T23:04:38.680 DEBUG:teuthology.orchestra.run.vm06:> done 2026-03-08T23:04:38.680 DEBUG:teuthology.orchestra.run.vm06:> done 2026-03-08T23:04:38.680 DEBUG:teuthology.orchestra.run.vm06:> ' 2026-03-08T23:04:39.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:39 vm06 bash[20625]: cluster 2026-03-08T23:04:37.635220+0000 mgr.y (mgr.24419) 71 : cluster [DBG] pgmap v32: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:04:39.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:39 vm06 bash[20625]: cluster 2026-03-08T23:04:37.635220+0000 mgr.y (mgr.24419) 71 : cluster [DBG] pgmap v32: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:04:39.279 
INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:39 vm06 bash[20625]: audit 2026-03-08T23:04:37.991265+0000 mon.a (mon.0) 799 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.12", "id": [7, 2]}]': finished 2026-03-08T23:04:39.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:39 vm06 bash[20625]: audit 2026-03-08T23:04:37.991265+0000 mon.a (mon.0) 799 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.12", "id": [7, 2]}]': finished 2026-03-08T23:04:39.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:39 vm06 bash[20625]: cluster 2026-03-08T23:04:37.999426+0000 mon.a (mon.0) 800 : cluster [DBG] osdmap e67: 8 total, 8 up, 8 in 2026-03-08T23:04:39.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:39 vm06 bash[20625]: cluster 2026-03-08T23:04:37.999426+0000 mon.a (mon.0) 800 : cluster [DBG] osdmap e67: 8 total, 8 up, 8 in 2026-03-08T23:04:39.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:39 vm06 bash[20625]: audit 2026-03-08T23:04:38.613283+0000 mon.c (mon.2) 64 : audit [DBG] from='client.? 192.168.123.106:0/240998843' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-08T23:04:39.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:39 vm06 bash[20625]: audit 2026-03-08T23:04:38.613283+0000 mon.c (mon.2) 64 : audit [DBG] from='client.? 
192.168.123.106:0/240998843' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-08T23:04:39.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:39 vm06 bash[27746]: cluster 2026-03-08T23:04:37.635220+0000 mgr.y (mgr.24419) 71 : cluster [DBG] pgmap v32: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:04:39.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:39 vm06 bash[27746]: cluster 2026-03-08T23:04:37.635220+0000 mgr.y (mgr.24419) 71 : cluster [DBG] pgmap v32: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:04:39.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:39 vm06 bash[27746]: audit 2026-03-08T23:04:37.991265+0000 mon.a (mon.0) 799 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.12", "id": [7, 2]}]': finished 2026-03-08T23:04:39.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:39 vm06 bash[27746]: audit 2026-03-08T23:04:37.991265+0000 mon.a (mon.0) 799 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.12", "id": [7, 2]}]': finished 2026-03-08T23:04:39.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:39 vm06 bash[27746]: cluster 2026-03-08T23:04:37.999426+0000 mon.a (mon.0) 800 : cluster [DBG] osdmap e67: 8 total, 8 up, 8 in 2026-03-08T23:04:39.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:39 vm06 bash[27746]: cluster 2026-03-08T23:04:37.999426+0000 mon.a (mon.0) 800 : cluster [DBG] osdmap e67: 8 total, 8 up, 8 in 2026-03-08T23:04:39.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:39 vm06 bash[27746]: audit 2026-03-08T23:04:38.613283+0000 mon.c (mon.2) 64 : audit [DBG] from='client.? 
192.168.123.106:0/240998843' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-08T23:04:39.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:39 vm06 bash[27746]: audit 2026-03-08T23:04:38.613283+0000 mon.c (mon.2) 64 : audit [DBG] from='client.? 192.168.123.106:0/240998843' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-08T23:04:39.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:39 vm11 bash[23232]: cluster 2026-03-08T23:04:37.635220+0000 mgr.y (mgr.24419) 71 : cluster [DBG] pgmap v32: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:04:39.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:39 vm11 bash[23232]: cluster 2026-03-08T23:04:37.635220+0000 mgr.y (mgr.24419) 71 : cluster [DBG] pgmap v32: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:04:39.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:39 vm11 bash[23232]: audit 2026-03-08T23:04:37.991265+0000 mon.a (mon.0) 799 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.12", "id": [7, 2]}]': finished 2026-03-08T23:04:39.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:39 vm11 bash[23232]: audit 2026-03-08T23:04:37.991265+0000 mon.a (mon.0) 799 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "2.12", "id": [7, 2]}]': finished 2026-03-08T23:04:39.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:39 vm11 bash[23232]: cluster 2026-03-08T23:04:37.999426+0000 mon.a (mon.0) 800 : cluster [DBG] osdmap e67: 8 total, 8 up, 8 in 2026-03-08T23:04:39.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:39 vm11 bash[23232]: cluster 2026-03-08T23:04:37.999426+0000 mon.a (mon.0) 800 : cluster [DBG] osdmap e67: 8 total, 8 up, 8 in 2026-03-08T23:04:39.307 
INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:39 vm11 bash[23232]: audit 2026-03-08T23:04:38.613283+0000 mon.c (mon.2) 64 : audit [DBG] from='client.? 192.168.123.106:0/240998843' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-08T23:04:39.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:39 vm11 bash[23232]: audit 2026-03-08T23:04:38.613283+0000 mon.c (mon.2) 64 : audit [DBG] from='client.? 192.168.123.106:0/240998843' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-08T23:04:40.046 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:04:39 vm11 bash[51186]: logger=infra.usagestats t=2026-03-08T23:04:39.751836377Z level=info msg="Usage stats are ready to report" 2026-03-08T23:04:40.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:40 vm11 bash[23232]: cluster 2026-03-08T23:04:39.052099+0000 mon.a (mon.0) 801 : cluster [DBG] osdmap e68: 8 total, 8 up, 8 in 2026-03-08T23:04:40.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:40 vm11 bash[23232]: cluster 2026-03-08T23:04:39.052099+0000 mon.a (mon.0) 801 : cluster [DBG] osdmap e68: 8 total, 8 up, 8 in 2026-03-08T23:04:40.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:40 vm06 bash[20625]: cluster 2026-03-08T23:04:39.052099+0000 mon.a (mon.0) 801 : cluster [DBG] osdmap e68: 8 total, 8 up, 8 in 2026-03-08T23:04:40.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:40 vm06 bash[20625]: cluster 2026-03-08T23:04:39.052099+0000 mon.a (mon.0) 801 : cluster [DBG] osdmap e68: 8 total, 8 up, 8 in 2026-03-08T23:04:40.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:40 vm06 bash[27746]: cluster 2026-03-08T23:04:39.052099+0000 mon.a (mon.0) 801 : cluster [DBG] osdmap e68: 8 total, 8 up, 8 in 2026-03-08T23:04:40.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:40 vm06 bash[27746]: cluster 2026-03-08T23:04:39.052099+0000 mon.a (mon.0) 801 : cluster [DBG] osdmap e68: 8 total, 8 up, 8 in 2026-03-08T23:04:41.029 
INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:04:40 vm06 bash[20883]: ::ffff:192.168.123.111 - - [08/Mar/2026:23:04:40] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-08T23:04:41.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:41 vm11 bash[23232]: cluster 2026-03-08T23:04:39.635470+0000 mgr.y (mgr.24419) 72 : cluster [DBG] pgmap v35: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-08T23:04:41.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:41 vm11 bash[23232]: cluster 2026-03-08T23:04:39.635470+0000 mgr.y (mgr.24419) 72 : cluster [DBG] pgmap v35: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-08T23:04:41.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:41 vm06 bash[20625]: cluster 2026-03-08T23:04:39.635470+0000 mgr.y (mgr.24419) 72 : cluster [DBG] pgmap v35: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-08T23:04:41.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:41 vm06 bash[20625]: cluster 2026-03-08T23:04:39.635470+0000 mgr.y (mgr.24419) 72 : cluster [DBG] pgmap v35: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-08T23:04:41.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:41 vm06 bash[27746]: cluster 2026-03-08T23:04:39.635470+0000 mgr.y (mgr.24419) 72 : cluster [DBG] pgmap v35: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-08T23:04:41.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:41 vm06 bash[27746]: cluster 2026-03-08T23:04:39.635470+0000 mgr.y (mgr.24419) 72 : cluster [DBG] pgmap v35: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-08T23:04:42.557 INFO:journalctl@ceph.iscsi.iscsi.a.vm11.stdout:Mar 08 23:04:42 vm11 bash[48986]: debug there 
is no tcmu-runner data available 2026-03-08T23:04:43.317 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/mon.c/config 2026-03-08T23:04:43.330 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:43 vm06 bash[27746]: cluster 2026-03-08T23:04:41.636092+0000 mgr.y (mgr.24419) 73 : cluster [DBG] pgmap v36: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:04:43.330 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:43 vm06 bash[27746]: cluster 2026-03-08T23:04:41.636092+0000 mgr.y (mgr.24419) 73 : cluster [DBG] pgmap v36: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:04:43.330 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:43 vm06 bash[27746]: audit 2026-03-08T23:04:42.085078+0000 mgr.y (mgr.24419) 74 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:04:43.330 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:43 vm06 bash[27746]: audit 2026-03-08T23:04:42.085078+0000 mgr.y (mgr.24419) 74 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:04:43.332 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:43 vm06 bash[20625]: cluster 2026-03-08T23:04:41.636092+0000 mgr.y (mgr.24419) 73 : cluster [DBG] pgmap v36: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:04:43.332 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:43 vm06 bash[20625]: cluster 2026-03-08T23:04:41.636092+0000 mgr.y (mgr.24419) 73 : cluster [DBG] pgmap v36: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:04:43.332 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:43 vm06 bash[20625]: audit 
2026-03-08T23:04:42.085078+0000 mgr.y (mgr.24419) 74 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:04:43.332 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:43 vm06 bash[20625]: audit 2026-03-08T23:04:42.085078+0000 mgr.y (mgr.24419) 74 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:04:43.514 INFO:teuthology.orchestra.run.vm06.stdout:rotating key for osd.0 2026-03-08T23:04:43.515 INFO:teuthology.orchestra.run.vm06.stderr:+ for f in osd.0 osd.1 osd.2 osd.3 osd.4 osd.5 osd.6 osd.7 mgr.y mgr.x 2026-03-08T23:04:43.515 INFO:teuthology.orchestra.run.vm06.stderr:+ echo 'rotating key for osd.0' 2026-03-08T23:04:43.515 INFO:teuthology.orchestra.run.vm06.stderr:++ ceph auth get-key osd.0 2026-03-08T23:04:43.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:43 vm11 bash[23232]: cluster 2026-03-08T23:04:41.636092+0000 mgr.y (mgr.24419) 73 : cluster [DBG] pgmap v36: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:04:43.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:43 vm11 bash[23232]: cluster 2026-03-08T23:04:41.636092+0000 mgr.y (mgr.24419) 73 : cluster [DBG] pgmap v36: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:04:43.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:43 vm11 bash[23232]: audit 2026-03-08T23:04:42.085078+0000 mgr.y (mgr.24419) 74 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:04:43.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:43 vm11 bash[23232]: audit 2026-03-08T23:04:42.085078+0000 mgr.y (mgr.24419) 74 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", 
"format": "json"}]: dispatch 2026-03-08T23:04:43.712 INFO:teuthology.orchestra.run.vm06.stderr:+ K=AQAN/61p7ZnnJxAAZxJmtKxfFvu+znVbqXFatQ== 2026-03-08T23:04:43.712 INFO:teuthology.orchestra.run.vm06.stderr:+ NK=AQAN/61p7ZnnJxAAZxJmtKxfFvu+znVbqXFatQ== 2026-03-08T23:04:43.712 INFO:teuthology.orchestra.run.vm06.stderr:+ ceph orch daemon rotate-key osd.0 2026-03-08T23:04:43.891 INFO:teuthology.orchestra.run.vm06.stdout:Scheduled to rotate-key osd.0 on host 'vm06' 2026-03-08T23:04:43.919 INFO:teuthology.orchestra.run.vm06.stderr:+ '[' AQAN/61p7ZnnJxAAZxJmtKxfFvu+znVbqXFatQ== == AQAN/61p7ZnnJxAAZxJmtKxfFvu+znVbqXFatQ== ']' 2026-03-08T23:04:43.919 INFO:teuthology.orchestra.run.vm06.stderr:+ sleep 5 2026-03-08T23:04:44.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:44 vm06 bash[20625]: audit 2026-03-08T23:04:43.703224+0000 mon.c (mon.2) 65 : audit [INF] from='client.? 192.168.123.106:0/365207475' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:04:44.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:44 vm06 bash[20625]: audit 2026-03-08T23:04:43.703224+0000 mon.c (mon.2) 65 : audit [INF] from='client.? 
192.168.123.106:0/365207475' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:04:44.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:44 vm06 bash[20625]: audit 2026-03-08T23:04:43.881196+0000 mon.a (mon.0) 802 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:44.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:44 vm06 bash[20625]: audit 2026-03-08T23:04:43.881196+0000 mon.a (mon.0) 802 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:44.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:44 vm06 bash[20625]: audit 2026-03-08T23:04:43.890623+0000 mon.a (mon.0) 803 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:44.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:44 vm06 bash[20625]: audit 2026-03-08T23:04:43.890623+0000 mon.a (mon.0) 803 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:44.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:44 vm06 bash[20625]: audit 2026-03-08T23:04:43.893142+0000 mon.c (mon.2) 66 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:04:44.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:44 vm06 bash[20625]: audit 2026-03-08T23:04:43.893142+0000 mon.c (mon.2) 66 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:04:44.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:44 vm06 bash[27746]: audit 2026-03-08T23:04:43.703224+0000 mon.c (mon.2) 65 : audit [INF] from='client.? 192.168.123.106:0/365207475' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:04:44.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:44 vm06 bash[27746]: audit 2026-03-08T23:04:43.703224+0000 mon.c (mon.2) 65 : audit [INF] from='client.? 
192.168.123.106:0/365207475' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:04:44.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:44 vm06 bash[27746]: audit 2026-03-08T23:04:43.881196+0000 mon.a (mon.0) 802 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:44.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:44 vm06 bash[27746]: audit 2026-03-08T23:04:43.881196+0000 mon.a (mon.0) 802 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:44.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:44 vm06 bash[27746]: audit 2026-03-08T23:04:43.890623+0000 mon.a (mon.0) 803 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:44.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:44 vm06 bash[27746]: audit 2026-03-08T23:04:43.890623+0000 mon.a (mon.0) 803 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:44.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:44 vm06 bash[27746]: audit 2026-03-08T23:04:43.893142+0000 mon.c (mon.2) 66 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:04:44.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:44 vm06 bash[27746]: audit 2026-03-08T23:04:43.893142+0000 mon.c (mon.2) 66 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:04:44.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:44 vm11 bash[23232]: audit 2026-03-08T23:04:43.703224+0000 mon.c (mon.2) 65 : audit [INF] from='client.? 192.168.123.106:0/365207475' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:04:44.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:44 vm11 bash[23232]: audit 2026-03-08T23:04:43.703224+0000 mon.c (mon.2) 65 : audit [INF] from='client.? 
192.168.123.106:0/365207475' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:04:44.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:44 vm11 bash[23232]: audit 2026-03-08T23:04:43.881196+0000 mon.a (mon.0) 802 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:44.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:44 vm11 bash[23232]: audit 2026-03-08T23:04:43.881196+0000 mon.a (mon.0) 802 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:44.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:44 vm11 bash[23232]: audit 2026-03-08T23:04:43.890623+0000 mon.a (mon.0) 803 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:44.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:44 vm11 bash[23232]: audit 2026-03-08T23:04:43.890623+0000 mon.a (mon.0) 803 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:44.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:44 vm11 bash[23232]: audit 2026-03-08T23:04:43.893142+0000 mon.c (mon.2) 66 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:04:44.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:44 vm11 bash[23232]: audit 2026-03-08T23:04:43.893142+0000 mon.c (mon.2) 66 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:04:45.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:45 vm06 bash[20625]: cluster 2026-03-08T23:04:43.636358+0000 mgr.y (mgr.24419) 75 : cluster [DBG] pgmap v37: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-08T23:04:45.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:45 vm06 bash[20625]: cluster 2026-03-08T23:04:43.636358+0000 mgr.y (mgr.24419) 75 : cluster [DBG] pgmap v37: 132 pgs: 132 active+clean; 455 KiB data, 216 
MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-08T23:04:45.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:45 vm06 bash[20625]: audit 2026-03-08T23:04:43.873363+0000 mgr.y (mgr.24419) 76 : audit [DBG] from='client.14670 -' entity='client.admin' cmd=[{"prefix": "orch daemon", "action": "rotate-key", "name": "osd.0", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:04:45.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:45 vm06 bash[20625]: audit 2026-03-08T23:04:43.873363+0000 mgr.y (mgr.24419) 76 : audit [DBG] from='client.14670 -' entity='client.admin' cmd=[{"prefix": "orch daemon", "action": "rotate-key", "name": "osd.0", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:04:45.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:45 vm06 bash[20625]: cephadm 2026-03-08T23:04:43.873759+0000 mgr.y (mgr.24419) 77 : cephadm [INF] Schedule rotate-key daemon osd.0 2026-03-08T23:04:45.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:45 vm06 bash[20625]: cephadm 2026-03-08T23:04:43.873759+0000 mgr.y (mgr.24419) 77 : cephadm [INF] Schedule rotate-key daemon osd.0 2026-03-08T23:04:45.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:45 vm06 bash[27746]: cluster 2026-03-08T23:04:43.636358+0000 mgr.y (mgr.24419) 75 : cluster [DBG] pgmap v37: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-08T23:04:45.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:45 vm06 bash[27746]: cluster 2026-03-08T23:04:43.636358+0000 mgr.y (mgr.24419) 75 : cluster [DBG] pgmap v37: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-08T23:04:45.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:45 vm06 bash[27746]: audit 2026-03-08T23:04:43.873363+0000 mgr.y (mgr.24419) 76 : audit [DBG] from='client.14670 -' entity='client.admin' cmd=[{"prefix": "orch daemon", "action": "rotate-key", "name": "osd.0", "target": ["mon-mgr", ""]}]: 
dispatch 2026-03-08T23:04:45.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:45 vm06 bash[27746]: audit 2026-03-08T23:04:43.873363+0000 mgr.y (mgr.24419) 76 : audit [DBG] from='client.14670 -' entity='client.admin' cmd=[{"prefix": "orch daemon", "action": "rotate-key", "name": "osd.0", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:04:45.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:45 vm06 bash[27746]: cephadm 2026-03-08T23:04:43.873759+0000 mgr.y (mgr.24419) 77 : cephadm [INF] Schedule rotate-key daemon osd.0 2026-03-08T23:04:45.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:45 vm06 bash[27746]: cephadm 2026-03-08T23:04:43.873759+0000 mgr.y (mgr.24419) 77 : cephadm [INF] Schedule rotate-key daemon osd.0 2026-03-08T23:04:45.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:45 vm11 bash[23232]: cluster 2026-03-08T23:04:43.636358+0000 mgr.y (mgr.24419) 75 : cluster [DBG] pgmap v37: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-08T23:04:45.572 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:45 vm11 bash[23232]: cluster 2026-03-08T23:04:43.636358+0000 mgr.y (mgr.24419) 75 : cluster [DBG] pgmap v37: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 639 B/s rd, 0 op/s 2026-03-08T23:04:45.572 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:45 vm11 bash[23232]: audit 2026-03-08T23:04:43.873363+0000 mgr.y (mgr.24419) 76 : audit [DBG] from='client.14670 -' entity='client.admin' cmd=[{"prefix": "orch daemon", "action": "rotate-key", "name": "osd.0", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:04:45.572 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:45 vm11 bash[23232]: audit 2026-03-08T23:04:43.873363+0000 mgr.y (mgr.24419) 76 : audit [DBG] from='client.14670 -' entity='client.admin' cmd=[{"prefix": "orch daemon", "action": "rotate-key", "name": "osd.0", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:04:45.572 
INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:45 vm11 bash[23232]: cephadm 2026-03-08T23:04:43.873759+0000 mgr.y (mgr.24419) 77 : cephadm [INF] Schedule rotate-key daemon osd.0 2026-03-08T23:04:45.572 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:45 vm11 bash[23232]: cephadm 2026-03-08T23:04:43.873759+0000 mgr.y (mgr.24419) 77 : cephadm [INF] Schedule rotate-key daemon osd.0 2026-03-08T23:04:47.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:47 vm06 bash[20625]: cluster 2026-03-08T23:04:45.636904+0000 mgr.y (mgr.24419) 78 : cluster [DBG] pgmap v38: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:04:47.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:47 vm06 bash[20625]: cluster 2026-03-08T23:04:45.636904+0000 mgr.y (mgr.24419) 78 : cluster [DBG] pgmap v38: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:04:47.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:47 vm06 bash[27746]: cluster 2026-03-08T23:04:45.636904+0000 mgr.y (mgr.24419) 78 : cluster [DBG] pgmap v38: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:04:47.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:47 vm06 bash[27746]: cluster 2026-03-08T23:04:45.636904+0000 mgr.y (mgr.24419) 78 : cluster [DBG] pgmap v38: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:04:47.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:47 vm11 bash[23232]: cluster 2026-03-08T23:04:45.636904+0000 mgr.y (mgr.24419) 78 : cluster [DBG] pgmap v38: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:04:47.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:47 vm11 bash[23232]: cluster 2026-03-08T23:04:45.636904+0000 mgr.y (mgr.24419) 78 : cluster [DBG] pgmap 
v38: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:04:48.924 INFO:teuthology.orchestra.run.vm06.stderr:++ ceph auth get-key osd.0 2026-03-08T23:04:49.192 INFO:teuthology.orchestra.run.vm06.stderr:+ NK=AQAN/61p7ZnnJxAAZxJmtKxfFvu+znVbqXFatQ== 2026-03-08T23:04:49.192 INFO:teuthology.orchestra.run.vm06.stderr:+ '[' AQAN/61p7ZnnJxAAZxJmtKxfFvu+znVbqXFatQ== == AQAN/61p7ZnnJxAAZxJmtKxfFvu+znVbqXFatQ== ']' 2026-03-08T23:04:49.192 INFO:teuthology.orchestra.run.vm06.stderr:+ sleep 5 2026-03-08T23:04:49.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:49 vm06 bash[20625]: cluster 2026-03-08T23:04:47.637163+0000 mgr.y (mgr.24419) 79 : cluster [DBG] pgmap v39: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s 2026-03-08T23:04:49.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:49 vm06 bash[20625]: cluster 2026-03-08T23:04:47.637163+0000 mgr.y (mgr.24419) 79 : cluster [DBG] pgmap v39: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s 2026-03-08T23:04:49.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:49 vm06 bash[27746]: cluster 2026-03-08T23:04:47.637163+0000 mgr.y (mgr.24419) 79 : cluster [DBG] pgmap v39: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s 2026-03-08T23:04:49.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:49 vm06 bash[27746]: cluster 2026-03-08T23:04:47.637163+0000 mgr.y (mgr.24419) 79 : cluster [DBG] pgmap v39: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s 2026-03-08T23:04:49.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:49 vm11 bash[23232]: cluster 2026-03-08T23:04:47.637163+0000 mgr.y (mgr.24419) 79 : cluster [DBG] pgmap v39: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s 
2026-03-08T23:04:49.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:49 vm11 bash[23232]: cluster 2026-03-08T23:04:47.637163+0000 mgr.y (mgr.24419) 79 : cluster [DBG] pgmap v39: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.0 KiB/s rd, 1 op/s 2026-03-08T23:04:50.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:50 vm06 bash[20625]: audit 2026-03-08T23:04:49.115772+0000 mon.a (mon.0) 804 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:50.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:50 vm06 bash[20625]: audit 2026-03-08T23:04:49.115772+0000 mon.a (mon.0) 804 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:50.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:50 vm06 bash[20625]: audit 2026-03-08T23:04:49.135213+0000 mon.a (mon.0) 805 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:50.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:50 vm06 bash[20625]: audit 2026-03-08T23:04:49.135213+0000 mon.a (mon.0) 805 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:50.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:50 vm06 bash[20625]: audit 2026-03-08T23:04:49.180392+0000 mon.b (mon.1) 37 : audit [INF] from='client.? 192.168.123.106:0/2684086741' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:04:50.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:50 vm06 bash[20625]: audit 2026-03-08T23:04:49.180392+0000 mon.b (mon.1) 37 : audit [INF] from='client.? 
192.168.123.106:0/2684086741' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:04:50.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:50 vm06 bash[20625]: audit 2026-03-08T23:04:49.314129+0000 mon.a (mon.0) 806 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:50.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:50 vm06 bash[20625]: audit 2026-03-08T23:04:49.314129+0000 mon.a (mon.0) 806 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:50.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:50 vm06 bash[20625]: audit 2026-03-08T23:04:49.330099+0000 mon.a (mon.0) 807 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:50.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:50 vm06 bash[20625]: audit 2026-03-08T23:04:49.330099+0000 mon.a (mon.0) 807 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:50.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:50 vm06 bash[20625]: audit 2026-03-08T23:04:49.661761+0000 mon.c (mon.2) 67 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:04:50.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:50 vm06 bash[20625]: audit 2026-03-08T23:04:49.661761+0000 mon.c (mon.2) 67 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:04:50.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:50 vm06 bash[20625]: audit 2026-03-08T23:04:49.663175+0000 mon.c (mon.2) 68 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:04:50.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:50 vm06 bash[20625]: audit 2026-03-08T23:04:49.663175+0000 mon.c (mon.2) 68 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' 
entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:04:50.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:50 vm06 bash[20625]: audit 2026-03-08T23:04:49.675441+0000 mon.a (mon.0) 808 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:50.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:50 vm06 bash[20625]: audit 2026-03-08T23:04:49.675441+0000 mon.a (mon.0) 808 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:50.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:50 vm06 bash[20625]: audit 2026-03-08T23:04:49.690976+0000 mon.c (mon.2) 69 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get-or-create-pending", "entity": "osd.0", "format": "json"}]: dispatch 2026-03-08T23:04:50.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:50 vm06 bash[20625]: audit 2026-03-08T23:04:49.690976+0000 mon.c (mon.2) 69 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get-or-create-pending", "entity": "osd.0", "format": "json"}]: dispatch 2026-03-08T23:04:50.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:50 vm06 bash[20625]: audit 2026-03-08T23:04:49.691616+0000 mon.a (mon.0) 809 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create-pending", "entity": "osd.0", "format": "json"}]: dispatch 2026-03-08T23:04:50.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:50 vm06 bash[20625]: audit 2026-03-08T23:04:49.691616+0000 mon.a (mon.0) 809 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create-pending", "entity": "osd.0", "format": "json"}]: dispatch 2026-03-08T23:04:50.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:50 vm06 bash[20625]: audit 2026-03-08T23:04:49.695433+0000 mon.a (mon.0) 810 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd='[{"prefix": "auth get-or-create-pending", "entity": "osd.0", "format": "json"}]': 
finished 2026-03-08T23:04:50.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:50 vm06 bash[20625]: audit 2026-03-08T23:04:49.695433+0000 mon.a (mon.0) 810 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd='[{"prefix": "auth get-or-create-pending", "entity": "osd.0", "format": "json"}]': finished 2026-03-08T23:04:50.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:50 vm06 bash[27746]: audit 2026-03-08T23:04:49.115772+0000 mon.a (mon.0) 804 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:50.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:50 vm06 bash[27746]: audit 2026-03-08T23:04:49.115772+0000 mon.a (mon.0) 804 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:50.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:50 vm06 bash[27746]: audit 2026-03-08T23:04:49.135213+0000 mon.a (mon.0) 805 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:50.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:50 vm06 bash[27746]: audit 2026-03-08T23:04:49.135213+0000 mon.a (mon.0) 805 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:50.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:50 vm06 bash[27746]: audit 2026-03-08T23:04:49.180392+0000 mon.b (mon.1) 37 : audit [INF] from='client.? 192.168.123.106:0/2684086741' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:04:50.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:50 vm06 bash[27746]: audit 2026-03-08T23:04:49.180392+0000 mon.b (mon.1) 37 : audit [INF] from='client.? 
192.168.123.106:0/2684086741' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:04:50.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:50 vm06 bash[27746]: audit 2026-03-08T23:04:49.314129+0000 mon.a (mon.0) 806 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:50.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:50 vm06 bash[27746]: audit 2026-03-08T23:04:49.314129+0000 mon.a (mon.0) 806 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:50.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:50 vm06 bash[27746]: audit 2026-03-08T23:04:49.330099+0000 mon.a (mon.0) 807 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:50.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:50 vm06 bash[27746]: audit 2026-03-08T23:04:49.330099+0000 mon.a (mon.0) 807 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:50.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:50 vm06 bash[27746]: audit 2026-03-08T23:04:49.661761+0000 mon.c (mon.2) 67 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:04:50.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:50 vm06 bash[27746]: audit 2026-03-08T23:04:49.661761+0000 mon.c (mon.2) 67 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:04:50.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:50 vm06 bash[27746]: audit 2026-03-08T23:04:49.663175+0000 mon.c (mon.2) 68 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:04:50.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:50 vm06 bash[27746]: audit 2026-03-08T23:04:49.663175+0000 mon.c (mon.2) 68 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' 
entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:04:50.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:50 vm06 bash[27746]: audit 2026-03-08T23:04:49.675441+0000 mon.a (mon.0) 808 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:50.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:50 vm06 bash[27746]: audit 2026-03-08T23:04:49.675441+0000 mon.a (mon.0) 808 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:50.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:50 vm06 bash[27746]: audit 2026-03-08T23:04:49.690976+0000 mon.c (mon.2) 69 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get-or-create-pending", "entity": "osd.0", "format": "json"}]: dispatch 2026-03-08T23:04:50.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:50 vm06 bash[27746]: audit 2026-03-08T23:04:49.690976+0000 mon.c (mon.2) 69 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get-or-create-pending", "entity": "osd.0", "format": "json"}]: dispatch 2026-03-08T23:04:50.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:50 vm06 bash[27746]: audit 2026-03-08T23:04:49.691616+0000 mon.a (mon.0) 809 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create-pending", "entity": "osd.0", "format": "json"}]: dispatch 2026-03-08T23:04:50.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:50 vm06 bash[27746]: audit 2026-03-08T23:04:49.691616+0000 mon.a (mon.0) 809 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create-pending", "entity": "osd.0", "format": "json"}]: dispatch 2026-03-08T23:04:50.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:50 vm06 bash[27746]: audit 2026-03-08T23:04:49.695433+0000 mon.a (mon.0) 810 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd='[{"prefix": "auth get-or-create-pending", "entity": "osd.0", "format": "json"}]': 
finished 2026-03-08T23:04:50.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:50 vm06 bash[27746]: audit 2026-03-08T23:04:49.695433+0000 mon.a (mon.0) 810 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd='[{"prefix": "auth get-or-create-pending", "entity": "osd.0", "format": "json"}]': finished 2026-03-08T23:04:50.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:50 vm11 bash[23232]: audit 2026-03-08T23:04:49.115772+0000 mon.a (mon.0) 804 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:50.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:50 vm11 bash[23232]: audit 2026-03-08T23:04:49.115772+0000 mon.a (mon.0) 804 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:50.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:50 vm11 bash[23232]: audit 2026-03-08T23:04:49.135213+0000 mon.a (mon.0) 805 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:50.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:50 vm11 bash[23232]: audit 2026-03-08T23:04:49.135213+0000 mon.a (mon.0) 805 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:50.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:50 vm11 bash[23232]: audit 2026-03-08T23:04:49.180392+0000 mon.b (mon.1) 37 : audit [INF] from='client.? 192.168.123.106:0/2684086741' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:04:50.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:50 vm11 bash[23232]: audit 2026-03-08T23:04:49.180392+0000 mon.b (mon.1) 37 : audit [INF] from='client.? 
192.168.123.106:0/2684086741' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:04:50.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:50 vm11 bash[23232]: audit 2026-03-08T23:04:49.314129+0000 mon.a (mon.0) 806 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:50.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:50 vm11 bash[23232]: audit 2026-03-08T23:04:49.314129+0000 mon.a (mon.0) 806 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:50.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:50 vm11 bash[23232]: audit 2026-03-08T23:04:49.330099+0000 mon.a (mon.0) 807 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:50.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:50 vm11 bash[23232]: audit 2026-03-08T23:04:49.330099+0000 mon.a (mon.0) 807 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:50.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:50 vm11 bash[23232]: audit 2026-03-08T23:04:49.661761+0000 mon.c (mon.2) 67 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:04:50.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:50 vm11 bash[23232]: audit 2026-03-08T23:04:49.661761+0000 mon.c (mon.2) 67 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:04:50.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:50 vm11 bash[23232]: audit 2026-03-08T23:04:49.663175+0000 mon.c (mon.2) 68 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:04:50.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:50 vm11 bash[23232]: audit 2026-03-08T23:04:49.663175+0000 mon.c (mon.2) 68 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' 
entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:04:50.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:50 vm11 bash[23232]: audit 2026-03-08T23:04:49.675441+0000 mon.a (mon.0) 808 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:50.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:50 vm11 bash[23232]: audit 2026-03-08T23:04:49.675441+0000 mon.a (mon.0) 808 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:50.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:50 vm11 bash[23232]: audit 2026-03-08T23:04:49.690976+0000 mon.c (mon.2) 69 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get-or-create-pending", "entity": "osd.0", "format": "json"}]: dispatch 2026-03-08T23:04:50.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:50 vm11 bash[23232]: audit 2026-03-08T23:04:49.690976+0000 mon.c (mon.2) 69 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get-or-create-pending", "entity": "osd.0", "format": "json"}]: dispatch 2026-03-08T23:04:50.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:50 vm11 bash[23232]: audit 2026-03-08T23:04:49.691616+0000 mon.a (mon.0) 809 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create-pending", "entity": "osd.0", "format": "json"}]: dispatch 2026-03-08T23:04:50.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:50 vm11 bash[23232]: audit 2026-03-08T23:04:49.691616+0000 mon.a (mon.0) 809 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create-pending", "entity": "osd.0", "format": "json"}]: dispatch 2026-03-08T23:04:50.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:50 vm11 bash[23232]: audit 2026-03-08T23:04:49.695433+0000 mon.a (mon.0) 810 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd='[{"prefix": "auth get-or-create-pending", "entity": "osd.0", "format": "json"}]': 
finished 2026-03-08T23:04:50.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:50 vm11 bash[23232]: audit 2026-03-08T23:04:49.695433+0000 mon.a (mon.0) 810 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd='[{"prefix": "auth get-or-create-pending", "entity": "osd.0", "format": "json"}]': finished 2026-03-08T23:04:51.029 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:04:50 vm06 bash[20883]: ::ffff:192.168.123.111 - - [08/Mar/2026:23:04:50] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-08T23:04:51.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:51 vm06 bash[20625]: cluster 2026-03-08T23:04:49.637430+0000 mgr.y (mgr.24419) 80 : cluster [DBG] pgmap v40: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 966 B/s rd, 0 op/s 2026-03-08T23:04:51.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:51 vm06 bash[20625]: cluster 2026-03-08T23:04:49.637430+0000 mgr.y (mgr.24419) 80 : cluster [DBG] pgmap v40: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 966 B/s rd, 0 op/s 2026-03-08T23:04:51.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:51 vm06 bash[20625]: cephadm 2026-03-08T23:04:49.690684+0000 mgr.y (mgr.24419) 81 : cephadm [INF] Rotating authentication key for osd.0 2026-03-08T23:04:51.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:51 vm06 bash[20625]: cephadm 2026-03-08T23:04:49.690684+0000 mgr.y (mgr.24419) 81 : cephadm [INF] Rotating authentication key for osd.0 2026-03-08T23:04:51.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:51 vm06 bash[20625]: cephadm 2026-03-08T23:04:49.704820+0000 mgr.y (mgr.24419) 82 : cephadm [INF] Reconfiguring daemon osd.0 on vm06 2026-03-08T23:04:51.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:51 vm06 bash[20625]: cephadm 2026-03-08T23:04:49.704820+0000 mgr.y (mgr.24419) 82 : cephadm [INF] Reconfiguring daemon osd.0 on vm06 2026-03-08T23:04:51.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:51 
vm06 bash[20625]: audit 2026-03-08T23:04:50.150881+0000 mon.a (mon.0) 811 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:51.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:51 vm06 bash[20625]: audit 2026-03-08T23:04:50.150881+0000 mon.a (mon.0) 811 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:51.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:51 vm06 bash[20625]: audit 2026-03-08T23:04:50.158729+0000 mon.a (mon.0) 812 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:51.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:51 vm06 bash[20625]: audit 2026-03-08T23:04:50.158729+0000 mon.a (mon.0) 812 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:51.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:51 vm06 bash[20625]: audit 2026-03-08T23:04:50.353535+0000 mon.a (mon.0) 813 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:51.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:51 vm06 bash[20625]: audit 2026-03-08T23:04:50.353535+0000 mon.a (mon.0) 813 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:51.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:51 vm06 bash[20625]: audit 2026-03-08T23:04:50.361323+0000 mon.a (mon.0) 814 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:51.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:51 vm06 bash[20625]: audit 2026-03-08T23:04:50.361323+0000 mon.a (mon.0) 814 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:51.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:51 vm06 bash[27746]: cluster 2026-03-08T23:04:49.637430+0000 mgr.y (mgr.24419) 80 : cluster [DBG] pgmap v40: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 966 B/s rd, 0 op/s 2026-03-08T23:04:51.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:51 vm06 bash[27746]: cluster 2026-03-08T23:04:49.637430+0000 mgr.y (mgr.24419) 80 : cluster [DBG] pgmap v40: 132 pgs: 
132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 966 B/s rd, 0 op/s 2026-03-08T23:04:51.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:51 vm06 bash[27746]: cephadm 2026-03-08T23:04:49.690684+0000 mgr.y (mgr.24419) 81 : cephadm [INF] Rotating authentication key for osd.0 2026-03-08T23:04:51.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:51 vm06 bash[27746]: cephadm 2026-03-08T23:04:49.690684+0000 mgr.y (mgr.24419) 81 : cephadm [INF] Rotating authentication key for osd.0 2026-03-08T23:04:51.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:51 vm06 bash[27746]: cephadm 2026-03-08T23:04:49.704820+0000 mgr.y (mgr.24419) 82 : cephadm [INF] Reconfiguring daemon osd.0 on vm06 2026-03-08T23:04:51.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:51 vm06 bash[27746]: cephadm 2026-03-08T23:04:49.704820+0000 mgr.y (mgr.24419) 82 : cephadm [INF] Reconfiguring daemon osd.0 on vm06 2026-03-08T23:04:51.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:51 vm06 bash[27746]: audit 2026-03-08T23:04:50.150881+0000 mon.a (mon.0) 811 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:51.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:51 vm06 bash[27746]: audit 2026-03-08T23:04:50.150881+0000 mon.a (mon.0) 811 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:51.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:51 vm06 bash[27746]: audit 2026-03-08T23:04:50.158729+0000 mon.a (mon.0) 812 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:51.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:51 vm06 bash[27746]: audit 2026-03-08T23:04:50.158729+0000 mon.a (mon.0) 812 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:51.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:51 vm06 bash[27746]: audit 2026-03-08T23:04:50.353535+0000 mon.a (mon.0) 813 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:51.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 
08 23:04:51 vm06 bash[27746]: audit 2026-03-08T23:04:50.353535+0000 mon.a (mon.0) 813 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:51.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:51 vm06 bash[27746]: audit 2026-03-08T23:04:50.361323+0000 mon.a (mon.0) 814 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:51.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:51 vm06 bash[27746]: audit 2026-03-08T23:04:50.361323+0000 mon.a (mon.0) 814 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:51.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:51 vm11 bash[23232]: cluster 2026-03-08T23:04:49.637430+0000 mgr.y (mgr.24419) 80 : cluster [DBG] pgmap v40: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 966 B/s rd, 0 op/s 2026-03-08T23:04:51.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:51 vm11 bash[23232]: cluster 2026-03-08T23:04:49.637430+0000 mgr.y (mgr.24419) 80 : cluster [DBG] pgmap v40: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 966 B/s rd, 0 op/s 2026-03-08T23:04:51.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:51 vm11 bash[23232]: cephadm 2026-03-08T23:04:49.690684+0000 mgr.y (mgr.24419) 81 : cephadm [INF] Rotating authentication key for osd.0 2026-03-08T23:04:51.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:51 vm11 bash[23232]: cephadm 2026-03-08T23:04:49.690684+0000 mgr.y (mgr.24419) 81 : cephadm [INF] Rotating authentication key for osd.0 2026-03-08T23:04:51.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:51 vm11 bash[23232]: cephadm 2026-03-08T23:04:49.704820+0000 mgr.y (mgr.24419) 82 : cephadm [INF] Reconfiguring daemon osd.0 on vm06 2026-03-08T23:04:51.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:51 vm11 bash[23232]: cephadm 2026-03-08T23:04:49.704820+0000 mgr.y (mgr.24419) 82 : cephadm [INF] Reconfiguring daemon osd.0 on vm06 2026-03-08T23:04:51.558 
INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:51 vm11 bash[23232]: audit 2026-03-08T23:04:50.150881+0000 mon.a (mon.0) 811 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:51.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:51 vm11 bash[23232]: audit 2026-03-08T23:04:50.150881+0000 mon.a (mon.0) 811 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:51.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:51 vm11 bash[23232]: audit 2026-03-08T23:04:50.158729+0000 mon.a (mon.0) 812 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:51.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:51 vm11 bash[23232]: audit 2026-03-08T23:04:50.158729+0000 mon.a (mon.0) 812 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:51.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:51 vm11 bash[23232]: audit 2026-03-08T23:04:50.353535+0000 mon.a (mon.0) 813 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:51.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:51 vm11 bash[23232]: audit 2026-03-08T23:04:50.353535+0000 mon.a (mon.0) 813 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:51.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:51 vm11 bash[23232]: audit 2026-03-08T23:04:50.361323+0000 mon.a (mon.0) 814 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:51.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:51 vm11 bash[23232]: audit 2026-03-08T23:04:50.361323+0000 mon.a (mon.0) 814 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:04:52.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:52 vm06 bash[27746]: cluster 2026-03-08T23:04:51.637957+0000 mgr.y (mgr.24419) 83 : cluster [DBG] pgmap v41: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:04:52.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:52 vm06 bash[27746]: cluster 2026-03-08T23:04:51.637957+0000 
mgr.y (mgr.24419) 83 : cluster [DBG] pgmap v41: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:04:52.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:52 vm06 bash[27746]: audit 2026-03-08T23:04:52.095875+0000 mgr.y (mgr.24419) 84 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:04:52.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:52 vm06 bash[27746]: audit 2026-03-08T23:04:52.095875+0000 mgr.y (mgr.24419) 84 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:04:52.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:52 vm06 bash[20625]: cluster 2026-03-08T23:04:51.637957+0000 mgr.y (mgr.24419) 83 : cluster [DBG] pgmap v41: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:04:52.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:52 vm06 bash[20625]: cluster 2026-03-08T23:04:51.637957+0000 mgr.y (mgr.24419) 83 : cluster [DBG] pgmap v41: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:04:52.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:52 vm06 bash[20625]: audit 2026-03-08T23:04:52.095875+0000 mgr.y (mgr.24419) 84 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:04:52.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:52 vm06 bash[20625]: audit 2026-03-08T23:04:52.095875+0000 mgr.y (mgr.24419) 84 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:04:52.557 INFO:journalctl@ceph.iscsi.iscsi.a.vm11.stdout:Mar 08 23:04:52 vm11 bash[48986]: debug there is no tcmu-runner 
data available 2026-03-08T23:04:52.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:52 vm11 bash[23232]: cluster 2026-03-08T23:04:51.637957+0000 mgr.y (mgr.24419) 83 : cluster [DBG] pgmap v41: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:04:52.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:52 vm11 bash[23232]: cluster 2026-03-08T23:04:51.637957+0000 mgr.y (mgr.24419) 83 : cluster [DBG] pgmap v41: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:04:52.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:52 vm11 bash[23232]: audit 2026-03-08T23:04:52.095875+0000 mgr.y (mgr.24419) 84 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:04:52.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:52 vm11 bash[23232]: audit 2026-03-08T23:04:52.095875+0000 mgr.y (mgr.24419) 84 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:04:53.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:53 vm06 bash[20625]: audit 2026-03-08T23:04:52.753321+0000 mon.c (mon.2) 70 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:04:53.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:53 vm06 bash[20625]: audit 2026-03-08T23:04:52.753321+0000 mon.c (mon.2) 70 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:04:53.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:53 vm06 bash[27746]: audit 2026-03-08T23:04:52.753321+0000 mon.c (mon.2) 70 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist 
ls", "format": "json"}]: dispatch 2026-03-08T23:04:53.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:53 vm06 bash[27746]: audit 2026-03-08T23:04:52.753321+0000 mon.c (mon.2) 70 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:04:53.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:53 vm11 bash[23232]: audit 2026-03-08T23:04:52.753321+0000 mon.c (mon.2) 70 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:04:53.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:53 vm11 bash[23232]: audit 2026-03-08T23:04:52.753321+0000 mon.c (mon.2) 70 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:04:54.193 INFO:teuthology.orchestra.run.vm06.stderr:++ ceph auth get-key osd.0 2026-03-08T23:04:54.581 INFO:teuthology.orchestra.run.vm06.stderr:+ NK=AQAN/61p7ZnnJxAAZxJmtKxfFvu+znVbqXFatQ== 2026-03-08T23:04:54.581 INFO:teuthology.orchestra.run.vm06.stderr:+ '[' AQAN/61p7ZnnJxAAZxJmtKxfFvu+znVbqXFatQ== == AQAN/61p7ZnnJxAAZxJmtKxfFvu+znVbqXFatQ== ']' 2026-03-08T23:04:54.581 INFO:teuthology.orchestra.run.vm06.stderr:+ sleep 5 2026-03-08T23:04:55.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:55 vm06 bash[20625]: cluster 2026-03-08T23:04:53.638281+0000 mgr.y (mgr.24419) 85 : cluster [DBG] pgmap v42: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:04:55.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:55 vm06 bash[20625]: cluster 2026-03-08T23:04:53.638281+0000 mgr.y (mgr.24419) 85 : cluster [DBG] pgmap v42: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:04:55.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:55 
vm06 bash[27746]: cluster 2026-03-08T23:04:53.638281+0000 mgr.y (mgr.24419) 85 : cluster [DBG] pgmap v42: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:04:55.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:55 vm06 bash[27746]: cluster 2026-03-08T23:04:53.638281+0000 mgr.y (mgr.24419) 85 : cluster [DBG] pgmap v42: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:04:55.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:55 vm11 bash[23232]: cluster 2026-03-08T23:04:53.638281+0000 mgr.y (mgr.24419) 85 : cluster [DBG] pgmap v42: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:04:55.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:55 vm11 bash[23232]: cluster 2026-03-08T23:04:53.638281+0000 mgr.y (mgr.24419) 85 : cluster [DBG] pgmap v42: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:04:56.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:56 vm06 bash[20625]: audit 2026-03-08T23:04:54.572004+0000 mon.a (mon.0) 815 : audit [INF] from='client.? 192.168.123.106:0/557526866' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:04:56.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:56 vm06 bash[20625]: audit 2026-03-08T23:04:54.572004+0000 mon.a (mon.0) 815 : audit [INF] from='client.? 192.168.123.106:0/557526866' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:04:56.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:56 vm06 bash[27746]: audit 2026-03-08T23:04:54.572004+0000 mon.a (mon.0) 815 : audit [INF] from='client.? 
192.168.123.106:0/557526866' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:04:56.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:56 vm06 bash[27746]: audit 2026-03-08T23:04:54.572004+0000 mon.a (mon.0) 815 : audit [INF] from='client.? 192.168.123.106:0/557526866' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:04:56.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:56 vm11 bash[23232]: audit 2026-03-08T23:04:54.572004+0000 mon.a (mon.0) 815 : audit [INF] from='client.? 192.168.123.106:0/557526866' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:04:56.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:56 vm11 bash[23232]: audit 2026-03-08T23:04:54.572004+0000 mon.a (mon.0) 815 : audit [INF] from='client.? 192.168.123.106:0/557526866' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:04:57.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:57 vm06 bash[20625]: cluster 2026-03-08T23:04:55.638724+0000 mgr.y (mgr.24419) 86 : cluster [DBG] pgmap v43: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:04:57.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:57 vm06 bash[20625]: cluster 2026-03-08T23:04:55.638724+0000 mgr.y (mgr.24419) 86 : cluster [DBG] pgmap v43: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:04:57.779 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:57 vm06 bash[27746]: cluster 2026-03-08T23:04:55.638724+0000 mgr.y (mgr.24419) 86 : cluster [DBG] pgmap v43: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:04:57.779 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:57 vm06 bash[27746]: cluster 2026-03-08T23:04:55.638724+0000 mgr.y 
(mgr.24419) 86 : cluster [DBG] pgmap v43: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:04:57.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:57 vm11 bash[23232]: cluster 2026-03-08T23:04:55.638724+0000 mgr.y (mgr.24419) 86 : cluster [DBG] pgmap v43: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:04:57.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:57 vm11 bash[23232]: cluster 2026-03-08T23:04:55.638724+0000 mgr.y (mgr.24419) 86 : cluster [DBG] pgmap v43: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:04:58.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:58 vm06 bash[20625]: cluster 2026-03-08T23:04:57.639016+0000 mgr.y (mgr.24419) 87 : cluster [DBG] pgmap v44: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:04:58.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:04:58 vm06 bash[20625]: cluster 2026-03-08T23:04:57.639016+0000 mgr.y (mgr.24419) 87 : cluster [DBG] pgmap v44: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:04:58.779 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:58 vm06 bash[27746]: cluster 2026-03-08T23:04:57.639016+0000 mgr.y (mgr.24419) 87 : cluster [DBG] pgmap v44: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:04:58.779 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:04:58 vm06 bash[27746]: cluster 2026-03-08T23:04:57.639016+0000 mgr.y (mgr.24419) 87 : cluster [DBG] pgmap v44: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:04:58.807 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:58 vm11 bash[23232]: cluster 2026-03-08T23:04:57.639016+0000 mgr.y 
(mgr.24419) 87 : cluster [DBG] pgmap v44: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:04:58.808 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:04:58 vm11 bash[23232]: cluster 2026-03-08T23:04:57.639016+0000 mgr.y (mgr.24419) 87 : cluster [DBG] pgmap v44: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:04:59.582 INFO:teuthology.orchestra.run.vm06.stderr:++ ceph auth get-key osd.0 2026-03-08T23:04:59.783 INFO:teuthology.orchestra.run.vm06.stderr:+ NK=AQAN/61p7ZnnJxAAZxJmtKxfFvu+znVbqXFatQ== 2026-03-08T23:04:59.783 INFO:teuthology.orchestra.run.vm06.stderr:+ '[' AQAN/61p7ZnnJxAAZxJmtKxfFvu+znVbqXFatQ== == AQAN/61p7ZnnJxAAZxJmtKxfFvu+znVbqXFatQ== ']' 2026-03-08T23:04:59.783 INFO:teuthology.orchestra.run.vm06.stderr:+ sleep 5 2026-03-08T23:05:01.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:05:00 vm06 bash[20625]: cluster 2026-03-08T23:04:59.639258+0000 mgr.y (mgr.24419) 88 : cluster [DBG] pgmap v45: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:05:01.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:05:00 vm06 bash[20625]: cluster 2026-03-08T23:04:59.639258+0000 mgr.y (mgr.24419) 88 : cluster [DBG] pgmap v45: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:05:01.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:05:00 vm06 bash[20625]: audit 2026-03-08T23:04:59.773218+0000 mon.c (mon.2) 71 : audit [INF] from='client.? 192.168.123.106:0/1183697096' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:05:01.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:05:00 vm06 bash[20625]: audit 2026-03-08T23:04:59.773218+0000 mon.c (mon.2) 71 : audit [INF] from='client.? 
192.168.123.106:0/1183697096' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:05:01.029 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:05:00 vm06 bash[20883]: ::ffff:192.168.123.111 - - [08/Mar/2026:23:05:00] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-08T23:05:01.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:05:00 vm06 bash[27746]: cluster 2026-03-08T23:04:59.639258+0000 mgr.y (mgr.24419) 88 : cluster [DBG] pgmap v45: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:05:01.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:05:00 vm06 bash[27746]: cluster 2026-03-08T23:04:59.639258+0000 mgr.y (mgr.24419) 88 : cluster [DBG] pgmap v45: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:05:01.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:05:00 vm06 bash[27746]: audit 2026-03-08T23:04:59.773218+0000 mon.c (mon.2) 71 : audit [INF] from='client.? 192.168.123.106:0/1183697096' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:05:01.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:05:00 vm06 bash[27746]: audit 2026-03-08T23:04:59.773218+0000 mon.c (mon.2) 71 : audit [INF] from='client.? 
192.168.123.106:0/1183697096' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:05:01.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:05:00 vm11 bash[23232]: cluster 2026-03-08T23:04:59.639258+0000 mgr.y (mgr.24419) 88 : cluster [DBG] pgmap v45: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:05:01.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:05:00 vm11 bash[23232]: cluster 2026-03-08T23:04:59.639258+0000 mgr.y (mgr.24419) 88 : cluster [DBG] pgmap v45: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:05:01.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:05:00 vm11 bash[23232]: audit 2026-03-08T23:04:59.773218+0000 mon.c (mon.2) 71 : audit [INF] from='client.? 192.168.123.106:0/1183697096' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:05:01.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:05:00 vm11 bash[23232]: audit 2026-03-08T23:04:59.773218+0000 mon.c (mon.2) 71 : audit [INF] from='client.? 
192.168.123.106:0/1183697096' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:05:02.557 INFO:journalctl@ceph.iscsi.iscsi.a.vm11.stdout:Mar 08 23:05:02 vm11 bash[48986]: debug there is no tcmu-runner data available 2026-03-08T23:05:03.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:05:02 vm06 bash[20625]: cluster 2026-03-08T23:05:01.639695+0000 mgr.y (mgr.24419) 89 : cluster [DBG] pgmap v46: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:05:03.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:05:02 vm06 bash[20625]: cluster 2026-03-08T23:05:01.639695+0000 mgr.y (mgr.24419) 89 : cluster [DBG] pgmap v46: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:05:03.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:05:02 vm06 bash[20625]: audit 2026-03-08T23:05:02.106668+0000 mgr.y (mgr.24419) 90 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:05:03.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:05:02 vm06 bash[20625]: audit 2026-03-08T23:05:02.106668+0000 mgr.y (mgr.24419) 90 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:05:03.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:05:02 vm06 bash[27746]: cluster 2026-03-08T23:05:01.639695+0000 mgr.y (mgr.24419) 89 : cluster [DBG] pgmap v46: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:05:03.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:05:02 vm06 bash[27746]: cluster 2026-03-08T23:05:01.639695+0000 mgr.y (mgr.24419) 89 : cluster [DBG] pgmap v46: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 
2026-03-08T23:05:03.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:05:02 vm06 bash[27746]: audit 2026-03-08T23:05:02.106668+0000 mgr.y (mgr.24419) 90 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:05:03.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:05:02 vm06 bash[27746]: audit 2026-03-08T23:05:02.106668+0000 mgr.y (mgr.24419) 90 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:05:03.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:05:02 vm11 bash[23232]: cluster 2026-03-08T23:05:01.639695+0000 mgr.y (mgr.24419) 89 : cluster [DBG] pgmap v46: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:05:03.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:05:02 vm11 bash[23232]: cluster 2026-03-08T23:05:01.639695+0000 mgr.y (mgr.24419) 89 : cluster [DBG] pgmap v46: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:05:03.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:05:02 vm11 bash[23232]: audit 2026-03-08T23:05:02.106668+0000 mgr.y (mgr.24419) 90 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:05:03.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:05:02 vm11 bash[23232]: audit 2026-03-08T23:05:02.106668+0000 mgr.y (mgr.24419) 90 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:05:04.785 INFO:teuthology.orchestra.run.vm06.stderr:++ ceph auth get-key osd.0 2026-03-08T23:05:04.989 INFO:teuthology.orchestra.run.vm06.stderr:+ NK=AQAN/61p7ZnnJxAAZxJmtKxfFvu+znVbqXFatQ== 2026-03-08T23:05:04.989 INFO:teuthology.orchestra.run.vm06.stderr:+ '[' 
AQAN/61p7ZnnJxAAZxJmtKxfFvu+znVbqXFatQ== == AQAN/61p7ZnnJxAAZxJmtKxfFvu+znVbqXFatQ== ']' 2026-03-08T23:05:04.989 INFO:teuthology.orchestra.run.vm06.stderr:+ sleep 5 2026-03-08T23:05:05.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:05:05 vm06 bash[20625]: cluster 2026-03-08T23:05:03.639993+0000 mgr.y (mgr.24419) 91 : cluster [DBG] pgmap v47: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:05:05.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:05:05 vm06 bash[20625]: cluster 2026-03-08T23:05:03.639993+0000 mgr.y (mgr.24419) 91 : cluster [DBG] pgmap v47: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:05:05.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:05:05 vm06 bash[20625]: audit 2026-03-08T23:05:04.977269+0000 mon.b (mon.1) 38 : audit [INF] from='client.? 192.168.123.106:0/432215427' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:05:05.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:05:05 vm06 bash[20625]: audit 2026-03-08T23:05:04.977269+0000 mon.b (mon.1) 38 : audit [INF] from='client.? 
192.168.123.106:0/432215427' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:05:05.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:05:05 vm06 bash[27746]: cluster 2026-03-08T23:05:03.639993+0000 mgr.y (mgr.24419) 91 : cluster [DBG] pgmap v47: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:05:05.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:05:05 vm06 bash[27746]: cluster 2026-03-08T23:05:03.639993+0000 mgr.y (mgr.24419) 91 : cluster [DBG] pgmap v47: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:05:05.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:05:05 vm06 bash[27746]: audit 2026-03-08T23:05:04.977269+0000 mon.b (mon.1) 38 : audit [INF] from='client.? 192.168.123.106:0/432215427' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:05:05.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:05:05 vm06 bash[27746]: audit 2026-03-08T23:05:04.977269+0000 mon.b (mon.1) 38 : audit [INF] from='client.? 
192.168.123.106:0/432215427' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:05:05.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:05:05 vm11 bash[23232]: cluster 2026-03-08T23:05:03.639993+0000 mgr.y (mgr.24419) 91 : cluster [DBG] pgmap v47: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:05:05.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:05:05 vm11 bash[23232]: cluster 2026-03-08T23:05:03.639993+0000 mgr.y (mgr.24419) 91 : cluster [DBG] pgmap v47: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:05:05.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:05:05 vm11 bash[23232]: audit 2026-03-08T23:05:04.977269+0000 mon.b (mon.1) 38 : audit [INF] from='client.? 192.168.123.106:0/432215427' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:05:05.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:05:05 vm11 bash[23232]: audit 2026-03-08T23:05:04.977269+0000 mon.b (mon.1) 38 : audit [INF] from='client.? 
192.168.123.106:0/432215427' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:05:07.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:05:07 vm06 bash[20625]: cluster 2026-03-08T23:05:05.640473+0000 mgr.y (mgr.24419) 92 : cluster [DBG] pgmap v48: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:05:07.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:05:07 vm06 bash[20625]: cluster 2026-03-08T23:05:05.640473+0000 mgr.y (mgr.24419) 92 : cluster [DBG] pgmap v48: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:05:07.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:05:07 vm06 bash[27746]: cluster 2026-03-08T23:05:05.640473+0000 mgr.y (mgr.24419) 92 : cluster [DBG] pgmap v48: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:05:07.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:05:07 vm06 bash[27746]: cluster 2026-03-08T23:05:05.640473+0000 mgr.y (mgr.24419) 92 : cluster [DBG] pgmap v48: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:05:07.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:05:07 vm11 bash[23232]: cluster 2026-03-08T23:05:05.640473+0000 mgr.y (mgr.24419) 92 : cluster [DBG] pgmap v48: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:05:07.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:05:07 vm11 bash[23232]: cluster 2026-03-08T23:05:05.640473+0000 mgr.y (mgr.24419) 92 : cluster [DBG] pgmap v48: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:05:08.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:05:08 vm06 bash[20625]: audit 2026-03-08T23:05:07.769807+0000 mon.c (mon.2) 72 : audit [DBG] 
from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:05:08.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:05:08 vm06 bash[20625]: audit 2026-03-08T23:05:07.769807+0000 mon.c (mon.2) 72 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:05:08.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:05:08 vm06 bash[27746]: audit 2026-03-08T23:05:07.769807+0000 mon.c (mon.2) 72 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:05:08.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:05:08 vm06 bash[27746]: audit 2026-03-08T23:05:07.769807+0000 mon.c (mon.2) 72 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:05:08.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:05:08 vm11 bash[23232]: audit 2026-03-08T23:05:07.769807+0000 mon.c (mon.2) 72 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:05:08.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:05:08 vm11 bash[23232]: audit 2026-03-08T23:05:07.769807+0000 mon.c (mon.2) 72 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:05:09.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:05:09 vm06 bash[20625]: cluster 2026-03-08T23:05:07.640720+0000 mgr.y (mgr.24419) 93 : cluster [DBG] pgmap v49: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:05:09.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:05:09 vm06 bash[20625]: cluster 
2026-03-08T23:05:07.640720+0000 mgr.y (mgr.24419) 93 : cluster [DBG] pgmap v49: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:05:09.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:05:09 vm06 bash[27746]: cluster 2026-03-08T23:05:07.640720+0000 mgr.y (mgr.24419) 93 : cluster [DBG] pgmap v49: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:05:09.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:05:09 vm06 bash[27746]: cluster 2026-03-08T23:05:07.640720+0000 mgr.y (mgr.24419) 93 : cluster [DBG] pgmap v49: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:05:09.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:05:09 vm11 bash[23232]: cluster 2026-03-08T23:05:07.640720+0000 mgr.y (mgr.24419) 93 : cluster [DBG] pgmap v49: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:05:09.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:05:09 vm11 bash[23232]: cluster 2026-03-08T23:05:07.640720+0000 mgr.y (mgr.24419) 93 : cluster [DBG] pgmap v49: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:05:09.991 INFO:teuthology.orchestra.run.vm06.stderr:++ ceph auth get-key osd.0 2026-03-08T23:05:10.194 INFO:teuthology.orchestra.run.vm06.stderr:+ NK=AQAN/61p7ZnnJxAAZxJmtKxfFvu+znVbqXFatQ== 2026-03-08T23:05:10.194 INFO:teuthology.orchestra.run.vm06.stderr:+ '[' AQAN/61p7ZnnJxAAZxJmtKxfFvu+znVbqXFatQ== == AQAN/61p7ZnnJxAAZxJmtKxfFvu+znVbqXFatQ== ']' 2026-03-08T23:05:10.194 INFO:teuthology.orchestra.run.vm06.stderr:+ sleep 5 2026-03-08T23:05:11.029 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:05:10 vm06 bash[20883]: ::ffff:192.168.123.111 - - [08/Mar/2026:23:05:10] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-08T23:05:11.529 
INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:05:11 vm06 bash[20625]: cluster 2026-03-08T23:05:09.640940+0000 mgr.y (mgr.24419) 94 : cluster [DBG] pgmap v50: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:05:11.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:05:11 vm06 bash[20625]: cluster 2026-03-08T23:05:09.640940+0000 mgr.y (mgr.24419) 94 : cluster [DBG] pgmap v50: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:05:11.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:05:11 vm06 bash[20625]: audit 2026-03-08T23:05:10.185013+0000 mon.a (mon.0) 816 : audit [INF] from='client.? 192.168.123.106:0/992106183' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:05:11.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:05:11 vm06 bash[20625]: audit 2026-03-08T23:05:10.185013+0000 mon.a (mon.0) 816 : audit [INF] from='client.? 192.168.123.106:0/992106183' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:05:11.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:05:11 vm06 bash[27746]: cluster 2026-03-08T23:05:09.640940+0000 mgr.y (mgr.24419) 94 : cluster [DBG] pgmap v50: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:05:11.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:05:11 vm06 bash[27746]: cluster 2026-03-08T23:05:09.640940+0000 mgr.y (mgr.24419) 94 : cluster [DBG] pgmap v50: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:05:11.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:05:11 vm06 bash[27746]: audit 2026-03-08T23:05:10.185013+0000 mon.a (mon.0) 816 : audit [INF] from='client.? 
192.168.123.106:0/992106183' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:05:11.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:05:11 vm06 bash[27746]: audit 2026-03-08T23:05:10.185013+0000 mon.a (mon.0) 816 : audit [INF] from='client.? 192.168.123.106:0/992106183' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:05:11.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:05:11 vm11 bash[23232]: cluster 2026-03-08T23:05:09.640940+0000 mgr.y (mgr.24419) 94 : cluster [DBG] pgmap v50: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:05:11.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:05:11 vm11 bash[23232]: cluster 2026-03-08T23:05:09.640940+0000 mgr.y (mgr.24419) 94 : cluster [DBG] pgmap v50: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:05:11.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:05:11 vm11 bash[23232]: audit 2026-03-08T23:05:10.185013+0000 mon.a (mon.0) 816 : audit [INF] from='client.? 192.168.123.106:0/992106183' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:05:11.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:05:11 vm11 bash[23232]: audit 2026-03-08T23:05:10.185013+0000 mon.a (mon.0) 816 : audit [INF] from='client.? 
192.168.123.106:0/992106183' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:05:12.557 INFO:journalctl@ceph.iscsi.iscsi.a.vm11.stdout:Mar 08 23:05:12 vm11 bash[48986]: debug there is no tcmu-runner data available 2026-03-08T23:05:13.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:05:13 vm06 bash[20625]: cluster 2026-03-08T23:05:11.641443+0000 mgr.y (mgr.24419) 95 : cluster [DBG] pgmap v51: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:05:13.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:05:13 vm06 bash[20625]: cluster 2026-03-08T23:05:11.641443+0000 mgr.y (mgr.24419) 95 : cluster [DBG] pgmap v51: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:05:13.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:05:13 vm06 bash[20625]: audit 2026-03-08T23:05:12.117398+0000 mgr.y (mgr.24419) 96 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:05:13.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:05:13 vm06 bash[20625]: audit 2026-03-08T23:05:12.117398+0000 mgr.y (mgr.24419) 96 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:05:13.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:05:13 vm06 bash[27746]: cluster 2026-03-08T23:05:11.641443+0000 mgr.y (mgr.24419) 95 : cluster [DBG] pgmap v51: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:05:13.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:05:13 vm06 bash[27746]: cluster 2026-03-08T23:05:11.641443+0000 mgr.y (mgr.24419) 95 : cluster [DBG] pgmap v51: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 
2026-03-08T23:05:13.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:05:13 vm06 bash[27746]: audit 2026-03-08T23:05:12.117398+0000 mgr.y (mgr.24419) 96 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:05:13.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:05:13 vm06 bash[27746]: audit 2026-03-08T23:05:12.117398+0000 mgr.y (mgr.24419) 96 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:05:13.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:05:13 vm11 bash[23232]: cluster 2026-03-08T23:05:11.641443+0000 mgr.y (mgr.24419) 95 : cluster [DBG] pgmap v51: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:05:13.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:05:13 vm11 bash[23232]: cluster 2026-03-08T23:05:11.641443+0000 mgr.y (mgr.24419) 95 : cluster [DBG] pgmap v51: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:05:13.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:05:13 vm11 bash[23232]: audit 2026-03-08T23:05:12.117398+0000 mgr.y (mgr.24419) 96 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:05:13.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:05:13 vm11 bash[23232]: audit 2026-03-08T23:05:12.117398+0000 mgr.y (mgr.24419) 96 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:05:15.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:05:14 vm06 bash[20625]: cluster 2026-03-08T23:05:13.641811+0000 mgr.y (mgr.24419) 97 : cluster [DBG] pgmap v52: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 
2026-03-08T23:05:15.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:05:14 vm06 bash[20625]: cluster 2026-03-08T23:05:13.641811+0000 mgr.y (mgr.24419) 97 : cluster [DBG] pgmap v52: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:05:15.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:05:14 vm06 bash[27746]: cluster 2026-03-08T23:05:13.641811+0000 mgr.y (mgr.24419) 97 : cluster [DBG] pgmap v52: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:05:15.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:05:14 vm06 bash[27746]: cluster 2026-03-08T23:05:13.641811+0000 mgr.y (mgr.24419) 97 : cluster [DBG] pgmap v52: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:05:15.196 INFO:teuthology.orchestra.run.vm06.stderr:++ ceph auth get-key osd.0 2026-03-08T23:05:15.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:05:14 vm11 bash[23232]: cluster 2026-03-08T23:05:13.641811+0000 mgr.y (mgr.24419) 97 : cluster [DBG] pgmap v52: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:05:15.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:05:14 vm11 bash[23232]: cluster 2026-03-08T23:05:13.641811+0000 mgr.y (mgr.24419) 97 : cluster [DBG] pgmap v52: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:05:15.392 INFO:teuthology.orchestra.run.vm06.stderr:+ NK=AQAN/61p7ZnnJxAAZxJmtKxfFvu+znVbqXFatQ== 2026-03-08T23:05:15.393 INFO:teuthology.orchestra.run.vm06.stderr:+ '[' AQAN/61p7ZnnJxAAZxJmtKxfFvu+znVbqXFatQ== == AQAN/61p7ZnnJxAAZxJmtKxfFvu+znVbqXFatQ== ']' 2026-03-08T23:05:15.393 INFO:teuthology.orchestra.run.vm06.stderr:+ sleep 5 2026-03-08T23:05:16.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:05:15 vm06 bash[20625]: audit 2026-03-08T23:05:15.382734+0000 
mon.c (mon.2) 73 : audit [INF] from='client.? 192.168.123.106:0/870158600' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:05:16.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:05:15 vm06 bash[20625]: audit 2026-03-08T23:05:15.382734+0000 mon.c (mon.2) 73 : audit [INF] from='client.? 192.168.123.106:0/870158600' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:05:16.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:05:15 vm06 bash[27746]: audit 2026-03-08T23:05:15.382734+0000 mon.c (mon.2) 73 : audit [INF] from='client.? 192.168.123.106:0/870158600' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:05:16.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:05:15 vm06 bash[27746]: audit 2026-03-08T23:05:15.382734+0000 mon.c (mon.2) 73 : audit [INF] from='client.? 192.168.123.106:0/870158600' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:05:16.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:05:15 vm11 bash[23232]: audit 2026-03-08T23:05:15.382734+0000 mon.c (mon.2) 73 : audit [INF] from='client.? 192.168.123.106:0/870158600' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:05:16.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:05:15 vm11 bash[23232]: audit 2026-03-08T23:05:15.382734+0000 mon.c (mon.2) 73 : audit [INF] from='client.? 
192.168.123.106:0/870158600' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:05:17.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:05:16 vm06 bash[20625]: cluster 2026-03-08T23:05:15.642328+0000 mgr.y (mgr.24419) 98 : cluster [DBG] pgmap v53: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:05:17.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:05:16 vm06 bash[20625]: cluster 2026-03-08T23:05:15.642328+0000 mgr.y (mgr.24419) 98 : cluster [DBG] pgmap v53: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:05:17.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:05:16 vm06 bash[27746]: cluster 2026-03-08T23:05:15.642328+0000 mgr.y (mgr.24419) 98 : cluster [DBG] pgmap v53: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:05:17.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:05:16 vm06 bash[27746]: cluster 2026-03-08T23:05:15.642328+0000 mgr.y (mgr.24419) 98 : cluster [DBG] pgmap v53: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:05:17.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:05:16 vm11 bash[23232]: cluster 2026-03-08T23:05:15.642328+0000 mgr.y (mgr.24419) 98 : cluster [DBG] pgmap v53: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:05:17.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:05:16 vm11 bash[23232]: cluster 2026-03-08T23:05:15.642328+0000 mgr.y (mgr.24419) 98 : cluster [DBG] pgmap v53: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:05:19.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:05:18 vm11 bash[23232]: cluster 2026-03-08T23:05:17.642572+0000 mgr.y (mgr.24419) 99 : cluster 
[DBG] pgmap v54: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:05:19.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:05:18 vm11 bash[23232]: cluster 2026-03-08T23:05:17.642572+0000 mgr.y (mgr.24419) 99 : cluster [DBG] pgmap v54: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:05:19.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:05:18 vm06 bash[20625]: cluster 2026-03-08T23:05:17.642572+0000 mgr.y (mgr.24419) 99 : cluster [DBG] pgmap v54: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:05:19.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:05:18 vm06 bash[20625]: cluster 2026-03-08T23:05:17.642572+0000 mgr.y (mgr.24419) 99 : cluster [DBG] pgmap v54: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:05:19.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:05:18 vm06 bash[27746]: cluster 2026-03-08T23:05:17.642572+0000 mgr.y (mgr.24419) 99 : cluster [DBG] pgmap v54: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:05:19.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:05:18 vm06 bash[27746]: cluster 2026-03-08T23:05:17.642572+0000 mgr.y (mgr.24419) 99 : cluster [DBG] pgmap v54: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:05:20.395 INFO:teuthology.orchestra.run.vm06.stderr:++ ceph auth get-key osd.0 2026-03-08T23:05:20.597 INFO:teuthology.orchestra.run.vm06.stderr:+ NK=AQAN/61p7ZnnJxAAZxJmtKxfFvu+znVbqXFatQ== 2026-03-08T23:05:20.597 INFO:teuthology.orchestra.run.vm06.stderr:+ '[' AQAN/61p7ZnnJxAAZxJmtKxfFvu+znVbqXFatQ== == AQAN/61p7ZnnJxAAZxJmtKxfFvu+znVbqXFatQ== ']' 2026-03-08T23:05:20.597 INFO:teuthology.orchestra.run.vm06.stderr:+ sleep 5 
2026-03-08T23:05:21.029 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:05:20 vm06 bash[20883]: ::ffff:192.168.123.111 - - [08/Mar/2026:23:05:20] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-08T23:05:21.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:05:20 vm06 bash[27746]: cluster 2026-03-08T23:05:19.642819+0000 mgr.y (mgr.24419) 100 : cluster [DBG] pgmap v55: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:05:21.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:05:20 vm06 bash[27746]: cluster 2026-03-08T23:05:19.642819+0000 mgr.y (mgr.24419) 100 : cluster [DBG] pgmap v55: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:05:21.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:05:20 vm06 bash[27746]: audit 2026-03-08T23:05:20.586722+0000 mon.b (mon.1) 39 : audit [INF] from='client.? 192.168.123.106:0/3090953472' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:05:21.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:05:20 vm06 bash[27746]: audit 2026-03-08T23:05:20.586722+0000 mon.b (mon.1) 39 : audit [INF] from='client.? 
192.168.123.106:0/3090953472' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:05:21.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:05:20 vm06 bash[20625]: cluster 2026-03-08T23:05:19.642819+0000 mgr.y (mgr.24419) 100 : cluster [DBG] pgmap v55: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:05:21.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:05:20 vm06 bash[20625]: cluster 2026-03-08T23:05:19.642819+0000 mgr.y (mgr.24419) 100 : cluster [DBG] pgmap v55: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:05:21.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:05:20 vm06 bash[20625]: audit 2026-03-08T23:05:20.586722+0000 mon.b (mon.1) 39 : audit [INF] from='client.? 192.168.123.106:0/3090953472' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:05:21.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:05:20 vm06 bash[20625]: audit 2026-03-08T23:05:20.586722+0000 mon.b (mon.1) 39 : audit [INF] from='client.? 
192.168.123.106:0/3090953472' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:05:21.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:05:20 vm11 bash[23232]: cluster 2026-03-08T23:05:19.642819+0000 mgr.y (mgr.24419) 100 : cluster [DBG] pgmap v55: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:05:21.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:05:20 vm11 bash[23232]: cluster 2026-03-08T23:05:19.642819+0000 mgr.y (mgr.24419) 100 : cluster [DBG] pgmap v55: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:05:21.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:05:20 vm11 bash[23232]: audit 2026-03-08T23:05:20.586722+0000 mon.b (mon.1) 39 : audit [INF] from='client.? 192.168.123.106:0/3090953472' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:05:21.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:05:20 vm11 bash[23232]: audit 2026-03-08T23:05:20.586722+0000 mon.b (mon.1) 39 : audit [INF] from='client.? 
192.168.123.106:0/3090953472' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:05:22.557 INFO:journalctl@ceph.iscsi.iscsi.a.vm11.stdout:Mar 08 23:05:22 vm11 bash[48986]: debug there is no tcmu-runner data available 2026-03-08T23:05:23.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:05:22 vm11 bash[23232]: cluster 2026-03-08T23:05:21.643319+0000 mgr.y (mgr.24419) 101 : cluster [DBG] pgmap v56: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:05:23.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:05:22 vm11 bash[23232]: cluster 2026-03-08T23:05:21.643319+0000 mgr.y (mgr.24419) 101 : cluster [DBG] pgmap v56: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:05:23.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:05:22 vm11 bash[23232]: audit 2026-03-08T23:05:22.128143+0000 mgr.y (mgr.24419) 102 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:05:23.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:05:22 vm11 bash[23232]: audit 2026-03-08T23:05:22.128143+0000 mgr.y (mgr.24419) 102 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:05:23.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:05:22 vm11 bash[23232]: audit 2026-03-08T23:05:22.775811+0000 mon.c (mon.2) 74 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:05:23.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:05:22 vm11 bash[23232]: audit 2026-03-08T23:05:22.775811+0000 mon.c (mon.2) 74 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 
2026-03-08T23:05:23.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:05:22 vm06 bash[27746]: cluster 2026-03-08T23:05:21.643319+0000 mgr.y (mgr.24419) 101 : cluster [DBG] pgmap v56: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:05:23.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:05:22 vm06 bash[27746]: cluster 2026-03-08T23:05:21.643319+0000 mgr.y (mgr.24419) 101 : cluster [DBG] pgmap v56: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:05:23.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:05:22 vm06 bash[27746]: audit 2026-03-08T23:05:22.128143+0000 mgr.y (mgr.24419) 102 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:05:23.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:05:22 vm06 bash[27746]: audit 2026-03-08T23:05:22.128143+0000 mgr.y (mgr.24419) 102 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:05:23.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:05:22 vm06 bash[27746]: audit 2026-03-08T23:05:22.775811+0000 mon.c (mon.2) 74 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:05:23.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:05:22 vm06 bash[27746]: audit 2026-03-08T23:05:22.775811+0000 mon.c (mon.2) 74 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:05:23.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:05:22 vm06 bash[20625]: cluster 2026-03-08T23:05:21.643319+0000 mgr.y (mgr.24419) 101 : cluster [DBG] pgmap v56: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 
KiB/s rd, 1 op/s 2026-03-08T23:05:23.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:05:22 vm06 bash[20625]: cluster 2026-03-08T23:05:21.643319+0000 mgr.y (mgr.24419) 101 : cluster [DBG] pgmap v56: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:05:23.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:05:22 vm06 bash[20625]: audit 2026-03-08T23:05:22.128143+0000 mgr.y (mgr.24419) 102 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:05:23.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:05:22 vm06 bash[20625]: audit 2026-03-08T23:05:22.128143+0000 mgr.y (mgr.24419) 102 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:05:23.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:05:22 vm06 bash[20625]: audit 2026-03-08T23:05:22.775811+0000 mon.c (mon.2) 74 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:05:23.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:05:22 vm06 bash[20625]: audit 2026-03-08T23:05:22.775811+0000 mon.c (mon.2) 74 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:05:25.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:05:25 vm06 bash[20625]: cluster 2026-03-08T23:05:23.643572+0000 mgr.y (mgr.24419) 103 : cluster [DBG] pgmap v57: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:05:25.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:05:25 vm06 bash[20625]: cluster 2026-03-08T23:05:23.643572+0000 mgr.y (mgr.24419) 103 : cluster [DBG] pgmap v57: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 
GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:05:25.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:05:25 vm06 bash[27746]: cluster 2026-03-08T23:05:23.643572+0000 mgr.y (mgr.24419) 103 : cluster [DBG] pgmap v57: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:05:25.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:05:25 vm06 bash[27746]: cluster 2026-03-08T23:05:23.643572+0000 mgr.y (mgr.24419) 103 : cluster [DBG] pgmap v57: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:05:25.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:05:25 vm11 bash[23232]: cluster 2026-03-08T23:05:23.643572+0000 mgr.y (mgr.24419) 103 : cluster [DBG] pgmap v57: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:05:25.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:05:25 vm11 bash[23232]: cluster 2026-03-08T23:05:23.643572+0000 mgr.y (mgr.24419) 103 : cluster [DBG] pgmap v57: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:05:25.599 INFO:teuthology.orchestra.run.vm06.stderr:++ ceph auth get-key osd.0 2026-03-08T23:05:25.787 INFO:teuthology.orchestra.run.vm06.stderr:+ NK=AQAN/61p7ZnnJxAAZxJmtKxfFvu+znVbqXFatQ== 2026-03-08T23:05:25.787 INFO:teuthology.orchestra.run.vm06.stderr:+ '[' AQAN/61p7ZnnJxAAZxJmtKxfFvu+znVbqXFatQ== == AQAN/61p7ZnnJxAAZxJmtKxfFvu+znVbqXFatQ== ']' 2026-03-08T23:05:25.787 INFO:teuthology.orchestra.run.vm06.stderr:+ sleep 5 2026-03-08T23:05:26.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:05:26 vm06 bash[27746]: audit 2026-03-08T23:05:25.778683+0000 mon.c (mon.2) 75 : audit [INF] from='client.? 
192.168.123.106:0/2671842797' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:05:26.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:05:26 vm06 bash[27746]: audit 2026-03-08T23:05:25.778683+0000 mon.c (mon.2) 75 : audit [INF] from='client.? 192.168.123.106:0/2671842797' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:05:26.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:05:26 vm06 bash[20625]: audit 2026-03-08T23:05:25.778683+0000 mon.c (mon.2) 75 : audit [INF] from='client.? 192.168.123.106:0/2671842797' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:05:26.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:05:26 vm06 bash[20625]: audit 2026-03-08T23:05:25.778683+0000 mon.c (mon.2) 75 : audit [INF] from='client.? 192.168.123.106:0/2671842797' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:05:26.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:05:26 vm11 bash[23232]: audit 2026-03-08T23:05:25.778683+0000 mon.c (mon.2) 75 : audit [INF] from='client.? 192.168.123.106:0/2671842797' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:05:26.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:05:26 vm11 bash[23232]: audit 2026-03-08T23:05:25.778683+0000 mon.c (mon.2) 75 : audit [INF] from='client.? 
192.168.123.106:0/2671842797' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:05:27.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:05:27 vm06 bash[20625]: cluster 2026-03-08T23:05:25.644003+0000 mgr.y (mgr.24419) 104 : cluster [DBG] pgmap v58: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:05:27.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:05:27 vm06 bash[20625]: cluster 2026-03-08T23:05:25.644003+0000 mgr.y (mgr.24419) 104 : cluster [DBG] pgmap v58: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:05:27.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:05:27 vm06 bash[27746]: cluster 2026-03-08T23:05:25.644003+0000 mgr.y (mgr.24419) 104 : cluster [DBG] pgmap v58: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:05:27.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:05:27 vm06 bash[27746]: cluster 2026-03-08T23:05:25.644003+0000 mgr.y (mgr.24419) 104 : cluster [DBG] pgmap v58: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:05:27.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:05:27 vm11 bash[23232]: cluster 2026-03-08T23:05:25.644003+0000 mgr.y (mgr.24419) 104 : cluster [DBG] pgmap v58: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:05:27.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:05:27 vm11 bash[23232]: cluster 2026-03-08T23:05:25.644003+0000 mgr.y (mgr.24419) 104 : cluster [DBG] pgmap v58: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:05:29.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:05:29 vm06 bash[20625]: cluster 2026-03-08T23:05:27.644256+0000 mgr.y (mgr.24419) 105 : 
cluster [DBG] pgmap v59: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:05:29.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:05:29 vm06 bash[20625]: cluster 2026-03-08T23:05:27.644256+0000 mgr.y (mgr.24419) 105 : cluster [DBG] pgmap v59: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:05:29.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:05:29 vm06 bash[27746]: cluster 2026-03-08T23:05:27.644256+0000 mgr.y (mgr.24419) 105 : cluster [DBG] pgmap v59: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:05:29.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:05:29 vm06 bash[27746]: cluster 2026-03-08T23:05:27.644256+0000 mgr.y (mgr.24419) 105 : cluster [DBG] pgmap v59: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:05:29.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:05:29 vm11 bash[23232]: cluster 2026-03-08T23:05:27.644256+0000 mgr.y (mgr.24419) 105 : cluster [DBG] pgmap v59: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:05:29.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:05:29 vm11 bash[23232]: cluster 2026-03-08T23:05:27.644256+0000 mgr.y (mgr.24419) 105 : cluster [DBG] pgmap v59: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:05:30.789 INFO:teuthology.orchestra.run.vm06.stderr:++ ceph auth get-key osd.0 2026-03-08T23:05:31.021 INFO:teuthology.orchestra.run.vm06.stderr:+ NK=AQAN/61p7ZnnJxAAZxJmtKxfFvu+znVbqXFatQ== 2026-03-08T23:05:31.021 INFO:teuthology.orchestra.run.vm06.stderr:+ '[' AQAN/61p7ZnnJxAAZxJmtKxfFvu+znVbqXFatQ== == AQAN/61p7ZnnJxAAZxJmtKxfFvu+znVbqXFatQ== ']' 2026-03-08T23:05:31.021 INFO:teuthology.orchestra.run.vm06.stderr:+ sleep 5 
2026-03-08T23:05:31.029 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:05:30 vm06 bash[20883]: ::ffff:192.168.123.111 - - [08/Mar/2026:23:05:30] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-08T23:05:31.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:05:31 vm06 bash[20625]: cluster 2026-03-08T23:05:29.644666+0000 mgr.y (mgr.24419) 106 : cluster [DBG] pgmap v60: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:05:31.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:05:31 vm06 bash[20625]: cluster 2026-03-08T23:05:29.644666+0000 mgr.y (mgr.24419) 106 : cluster [DBG] pgmap v60: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:05:31.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:05:31 vm06 bash[20625]: audit 2026-03-08T23:05:31.010211+0000 mon.a (mon.0) 817 : audit [INF] from='client.? 192.168.123.106:0/3131482394' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:05:31.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:05:31 vm06 bash[20625]: audit 2026-03-08T23:05:31.010211+0000 mon.a (mon.0) 817 : audit [INF] from='client.? 
192.168.123.106:0/3131482394' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:05:31.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:05:31 vm06 bash[27746]: cluster 2026-03-08T23:05:29.644666+0000 mgr.y (mgr.24419) 106 : cluster [DBG] pgmap v60: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:05:31.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:05:31 vm06 bash[27746]: cluster 2026-03-08T23:05:29.644666+0000 mgr.y (mgr.24419) 106 : cluster [DBG] pgmap v60: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:05:31.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:05:31 vm06 bash[27746]: audit 2026-03-08T23:05:31.010211+0000 mon.a (mon.0) 817 : audit [INF] from='client.? 192.168.123.106:0/3131482394' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:05:31.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:05:31 vm06 bash[27746]: audit 2026-03-08T23:05:31.010211+0000 mon.a (mon.0) 817 : audit [INF] from='client.? 
192.168.123.106:0/3131482394' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:05:31.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:05:31 vm11 bash[23232]: cluster 2026-03-08T23:05:29.644666+0000 mgr.y (mgr.24419) 106 : cluster [DBG] pgmap v60: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:05:31.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:05:31 vm11 bash[23232]: cluster 2026-03-08T23:05:29.644666+0000 mgr.y (mgr.24419) 106 : cluster [DBG] pgmap v60: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:05:31.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:05:31 vm11 bash[23232]: audit 2026-03-08T23:05:31.010211+0000 mon.a (mon.0) 817 : audit [INF] from='client.? 192.168.123.106:0/3131482394' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:05:31.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:05:31 vm11 bash[23232]: audit 2026-03-08T23:05:31.010211+0000 mon.a (mon.0) 817 : audit [INF] from='client.? 
192.168.123.106:0/3131482394' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:05:32.557 INFO:journalctl@ceph.iscsi.iscsi.a.vm11.stdout:Mar 08 23:05:32 vm11 bash[48986]: debug there is no tcmu-runner data available 2026-03-08T23:05:33.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:05:33 vm06 bash[20625]: cluster 2026-03-08T23:05:31.645160+0000 mgr.y (mgr.24419) 107 : cluster [DBG] pgmap v61: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:05:33.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:05:33 vm06 bash[20625]: cluster 2026-03-08T23:05:31.645160+0000 mgr.y (mgr.24419) 107 : cluster [DBG] pgmap v61: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:05:33.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:05:33 vm06 bash[20625]: audit 2026-03-08T23:05:32.133616+0000 mgr.y (mgr.24419) 108 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:05:33.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:05:33 vm06 bash[20625]: audit 2026-03-08T23:05:32.133616+0000 mgr.y (mgr.24419) 108 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:05:33.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:05:33 vm06 bash[27746]: cluster 2026-03-08T23:05:31.645160+0000 mgr.y (mgr.24419) 107 : cluster [DBG] pgmap v61: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:05:33.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:05:33 vm06 bash[27746]: cluster 2026-03-08T23:05:31.645160+0000 mgr.y (mgr.24419) 107 : cluster [DBG] pgmap v61: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 
2026-03-08T23:05:33.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:05:33 vm06 bash[27746]: audit 2026-03-08T23:05:32.133616+0000 mgr.y (mgr.24419) 108 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:05:33.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:05:33 vm06 bash[27746]: audit 2026-03-08T23:05:32.133616+0000 mgr.y (mgr.24419) 108 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:05:33.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:05:33 vm11 bash[23232]: cluster 2026-03-08T23:05:31.645160+0000 mgr.y (mgr.24419) 107 : cluster [DBG] pgmap v61: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:05:33.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:05:33 vm11 bash[23232]: cluster 2026-03-08T23:05:31.645160+0000 mgr.y (mgr.24419) 107 : cluster [DBG] pgmap v61: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:05:33.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:05:33 vm11 bash[23232]: audit 2026-03-08T23:05:32.133616+0000 mgr.y (mgr.24419) 108 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:05:33.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:05:33 vm11 bash[23232]: audit 2026-03-08T23:05:32.133616+0000 mgr.y (mgr.24419) 108 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:05:35.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:05:35 vm06 bash[20625]: cluster 2026-03-08T23:05:33.645437+0000 mgr.y (mgr.24419) 109 : cluster [DBG] pgmap v62: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 
op/s 2026-03-08T23:05:35.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:05:35 vm06 bash[20625]: cluster 2026-03-08T23:05:33.645437+0000 mgr.y (mgr.24419) 109 : cluster [DBG] pgmap v62: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:05:35.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:05:35 vm06 bash[27746]: cluster 2026-03-08T23:05:33.645437+0000 mgr.y (mgr.24419) 109 : cluster [DBG] pgmap v62: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:05:35.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:05:35 vm06 bash[27746]: cluster 2026-03-08T23:05:33.645437+0000 mgr.y (mgr.24419) 109 : cluster [DBG] pgmap v62: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:05:35.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:05:35 vm11 bash[23232]: cluster 2026-03-08T23:05:33.645437+0000 mgr.y (mgr.24419) 109 : cluster [DBG] pgmap v62: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:05:35.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:05:35 vm11 bash[23232]: cluster 2026-03-08T23:05:33.645437+0000 mgr.y (mgr.24419) 109 : cluster [DBG] pgmap v62: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:05:36.022 INFO:teuthology.orchestra.run.vm06.stderr:++ ceph auth get-key osd.0 2026-03-08T23:05:36.219 INFO:teuthology.orchestra.run.vm06.stderr:+ NK=AQAN/61p7ZnnJxAAZxJmtKxfFvu+znVbqXFatQ== 2026-03-08T23:05:36.219 INFO:teuthology.orchestra.run.vm06.stderr:+ '[' AQAN/61p7ZnnJxAAZxJmtKxfFvu+znVbqXFatQ== == AQAN/61p7ZnnJxAAZxJmtKxfFvu+znVbqXFatQ== ']' 2026-03-08T23:05:36.219 INFO:teuthology.orchestra.run.vm06.stderr:+ sleep 5 2026-03-08T23:05:37.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:05:37 vm06 bash[20625]: cluster 
2026-03-08T23:05:35.645907+0000 mgr.y (mgr.24419) 110 : cluster [DBG] pgmap v63: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:05:37.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:05:37 vm06 bash[20625]: cluster 2026-03-08T23:05:35.645907+0000 mgr.y (mgr.24419) 110 : cluster [DBG] pgmap v63: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:05:37.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:05:37 vm06 bash[20625]: audit 2026-03-08T23:05:36.211155+0000 mon.a (mon.0) 818 : audit [INF] from='client.? 192.168.123.106:0/124702659' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:05:37.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:05:37 vm06 bash[20625]: audit 2026-03-08T23:05:36.211155+0000 mon.a (mon.0) 818 : audit [INF] from='client.? 192.168.123.106:0/124702659' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:05:37.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:05:37 vm06 bash[27746]: cluster 2026-03-08T23:05:35.645907+0000 mgr.y (mgr.24419) 110 : cluster [DBG] pgmap v63: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:05:37.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:05:37 vm06 bash[27746]: cluster 2026-03-08T23:05:35.645907+0000 mgr.y (mgr.24419) 110 : cluster [DBG] pgmap v63: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:05:37.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:05:37 vm06 bash[27746]: audit 2026-03-08T23:05:36.211155+0000 mon.a (mon.0) 818 : audit [INF] from='client.? 
192.168.123.106:0/124702659' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:05:37.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:05:37 vm06 bash[27746]: audit 2026-03-08T23:05:36.211155+0000 mon.a (mon.0) 818 : audit [INF] from='client.? 192.168.123.106:0/124702659' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:05:37.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:05:37 vm11 bash[23232]: cluster 2026-03-08T23:05:35.645907+0000 mgr.y (mgr.24419) 110 : cluster [DBG] pgmap v63: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:05:37.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:05:37 vm11 bash[23232]: cluster 2026-03-08T23:05:35.645907+0000 mgr.y (mgr.24419) 110 : cluster [DBG] pgmap v63: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:05:37.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:05:37 vm11 bash[23232]: audit 2026-03-08T23:05:36.211155+0000 mon.a (mon.0) 818 : audit [INF] from='client.? 192.168.123.106:0/124702659' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:05:37.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:05:37 vm11 bash[23232]: audit 2026-03-08T23:05:36.211155+0000 mon.a (mon.0) 818 : audit [INF] from='client.? 
192.168.123.106:0/124702659' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:05:38.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:05:38 vm06 bash[20625]: audit 2026-03-08T23:05:37.781920+0000 mon.c (mon.2) 76 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:05:38.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:05:38 vm06 bash[20625]: audit 2026-03-08T23:05:37.781920+0000 mon.c (mon.2) 76 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:05:38.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:05:38 vm06 bash[27746]: audit 2026-03-08T23:05:37.781920+0000 mon.c (mon.2) 76 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:05:38.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:05:38 vm06 bash[27746]: audit 2026-03-08T23:05:37.781920+0000 mon.c (mon.2) 76 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:05:38.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:05:38 vm11 bash[23232]: audit 2026-03-08T23:05:37.781920+0000 mon.c (mon.2) 76 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:05:38.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:05:38 vm11 bash[23232]: audit 2026-03-08T23:05:37.781920+0000 mon.c (mon.2) 76 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:05:39.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:05:39 vm06 bash[20625]: cluster 2026-03-08T23:05:37.646250+0000 
mgr.y (mgr.24419) 111 : cluster [DBG] pgmap v64: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:05:39.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:05:39 vm06 bash[20625]: cluster 2026-03-08T23:05:37.646250+0000 mgr.y (mgr.24419) 111 : cluster [DBG] pgmap v64: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:05:39.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:05:39 vm06 bash[27746]: cluster 2026-03-08T23:05:37.646250+0000 mgr.y (mgr.24419) 111 : cluster [DBG] pgmap v64: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:05:39.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:05:39 vm06 bash[27746]: cluster 2026-03-08T23:05:37.646250+0000 mgr.y (mgr.24419) 111 : cluster [DBG] pgmap v64: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:05:39.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:05:39 vm11 bash[23232]: cluster 2026-03-08T23:05:37.646250+0000 mgr.y (mgr.24419) 111 : cluster [DBG] pgmap v64: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:05:39.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:05:39 vm11 bash[23232]: cluster 2026-03-08T23:05:37.646250+0000 mgr.y (mgr.24419) 111 : cluster [DBG] pgmap v64: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:05:40.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:05:40 vm06 bash[20625]: cluster 2026-03-08T23:05:39.646597+0000 mgr.y (mgr.24419) 112 : cluster [DBG] pgmap v65: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:05:40.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:05:40 vm06 bash[20625]: cluster 2026-03-08T23:05:39.646597+0000 
mgr.y (mgr.24419) 112 : cluster [DBG] pgmap v65: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:05:40.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:05:40 vm06 bash[27746]: cluster 2026-03-08T23:05:39.646597+0000 mgr.y (mgr.24419) 112 : cluster [DBG] pgmap v65: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:05:40.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:05:40 vm06 bash[27746]: cluster 2026-03-08T23:05:39.646597+0000 mgr.y (mgr.24419) 112 : cluster [DBG] pgmap v65: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:05:40.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:05:40 vm11 bash[23232]: cluster 2026-03-08T23:05:39.646597+0000 mgr.y (mgr.24419) 112 : cluster [DBG] pgmap v65: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:05:40.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:05:40 vm11 bash[23232]: cluster 2026-03-08T23:05:39.646597+0000 mgr.y (mgr.24419) 112 : cluster [DBG] pgmap v65: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:05:41.029 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:05:40 vm06 bash[20883]: ::ffff:192.168.123.111 - - [08/Mar/2026:23:05:40] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-08T23:05:41.221 INFO:teuthology.orchestra.run.vm06.stderr:++ ceph auth get-key osd.0 2026-03-08T23:05:41.411 INFO:teuthology.orchestra.run.vm06.stderr:+ NK=AQAN/61p7ZnnJxAAZxJmtKxfFvu+znVbqXFatQ== 2026-03-08T23:05:41.411 INFO:teuthology.orchestra.run.vm06.stderr:+ '[' AQAN/61p7ZnnJxAAZxJmtKxfFvu+znVbqXFatQ== == AQAN/61p7ZnnJxAAZxJmtKxfFvu+znVbqXFatQ== ']' 2026-03-08T23:05:41.411 INFO:teuthology.orchestra.run.vm06.stderr:+ sleep 5 2026-03-08T23:05:42.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 
08 23:05:41 vm06 bash[20625]: audit 2026-03-08T23:05:41.402317+0000 mon.a (mon.0) 819 : audit [INF] from='client.? 192.168.123.106:0/3853648080' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:05:42.226 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:05:41 vm06 bash[20625]: audit 2026-03-08T23:05:41.402317+0000 mon.a (mon.0) 819 : audit [INF] from='client.? 192.168.123.106:0/3853648080' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:05:42.226 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:05:41 vm06 bash[27746]: audit 2026-03-08T23:05:41.402317+0000 mon.a (mon.0) 819 : audit [INF] from='client.? 192.168.123.106:0/3853648080' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:05:42.226 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:05:41 vm06 bash[27746]: audit 2026-03-08T23:05:41.402317+0000 mon.a (mon.0) 819 : audit [INF] from='client.? 192.168.123.106:0/3853648080' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:05:42.226 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:05:41 vm11 bash[23232]: audit 2026-03-08T23:05:41.402317+0000 mon.a (mon.0) 819 : audit [INF] from='client.? 192.168.123.106:0/3853648080' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:05:42.226 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:05:41 vm11 bash[23232]: audit 2026-03-08T23:05:41.402317+0000 mon.a (mon.0) 819 : audit [INF] from='client.? 
192.168.123.106:0/3853648080' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:05:42.557 INFO:journalctl@ceph.iscsi.iscsi.a.vm11.stdout:Mar 08 23:05:42 vm11 bash[48986]: debug there is no tcmu-runner data available 2026-03-08T23:05:43.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:05:42 vm06 bash[20625]: cluster 2026-03-08T23:05:41.647060+0000 mgr.y (mgr.24419) 113 : cluster [DBG] pgmap v66: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:05:43.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:05:42 vm06 bash[20625]: cluster 2026-03-08T23:05:41.647060+0000 mgr.y (mgr.24419) 113 : cluster [DBG] pgmap v66: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:05:43.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:05:42 vm06 bash[20625]: audit 2026-03-08T23:05:42.139018+0000 mgr.y (mgr.24419) 114 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:05:43.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:05:42 vm06 bash[20625]: audit 2026-03-08T23:05:42.139018+0000 mgr.y (mgr.24419) 114 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:05:43.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:05:42 vm06 bash[27746]: cluster 2026-03-08T23:05:41.647060+0000 mgr.y (mgr.24419) 113 : cluster [DBG] pgmap v66: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:05:43.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:05:42 vm06 bash[27746]: cluster 2026-03-08T23:05:41.647060+0000 mgr.y (mgr.24419) 113 : cluster [DBG] pgmap v66: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 
2026-03-08T23:05:43.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:05:42 vm06 bash[27746]: audit 2026-03-08T23:05:42.139018+0000 mgr.y (mgr.24419) 114 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:05:43.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:05:42 vm06 bash[27746]: audit 2026-03-08T23:05:42.139018+0000 mgr.y (mgr.24419) 114 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:05:43.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:05:42 vm11 bash[23232]: cluster 2026-03-08T23:05:41.647060+0000 mgr.y (mgr.24419) 113 : cluster [DBG] pgmap v66: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:05:43.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:05:42 vm11 bash[23232]: cluster 2026-03-08T23:05:41.647060+0000 mgr.y (mgr.24419) 113 : cluster [DBG] pgmap v66: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:05:43.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:05:42 vm11 bash[23232]: audit 2026-03-08T23:05:42.139018+0000 mgr.y (mgr.24419) 114 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:05:43.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:05:42 vm11 bash[23232]: audit 2026-03-08T23:05:42.139018+0000 mgr.y (mgr.24419) 114 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:05:45.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:05:44 vm06 bash[20625]: cluster 2026-03-08T23:05:43.647365+0000 mgr.y (mgr.24419) 115 : cluster [DBG] pgmap v67: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 
op/s 2026-03-08T23:05:45.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:05:44 vm06 bash[20625]: cluster 2026-03-08T23:05:43.647365+0000 mgr.y (mgr.24419) 115 : cluster [DBG] pgmap v67: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:05:45.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:05:44 vm06 bash[27746]: cluster 2026-03-08T23:05:43.647365+0000 mgr.y (mgr.24419) 115 : cluster [DBG] pgmap v67: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:05:45.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:05:44 vm06 bash[27746]: cluster 2026-03-08T23:05:43.647365+0000 mgr.y (mgr.24419) 115 : cluster [DBG] pgmap v67: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:05:45.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:05:44 vm11 bash[23232]: cluster 2026-03-08T23:05:43.647365+0000 mgr.y (mgr.24419) 115 : cluster [DBG] pgmap v67: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:05:45.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:05:44 vm11 bash[23232]: cluster 2026-03-08T23:05:43.647365+0000 mgr.y (mgr.24419) 115 : cluster [DBG] pgmap v67: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:05:46.413 INFO:teuthology.orchestra.run.vm06.stderr:++ ceph auth get-key osd.0 2026-03-08T23:05:46.599 INFO:teuthology.orchestra.run.vm06.stderr:+ NK=AQAN/61p7ZnnJxAAZxJmtKxfFvu+znVbqXFatQ== 2026-03-08T23:05:46.599 INFO:teuthology.orchestra.run.vm06.stderr:+ '[' AQAN/61p7ZnnJxAAZxJmtKxfFvu+znVbqXFatQ== == AQAN/61p7ZnnJxAAZxJmtKxfFvu+znVbqXFatQ== ']' 2026-03-08T23:05:46.599 INFO:teuthology.orchestra.run.vm06.stderr:+ sleep 5 2026-03-08T23:05:47.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:05:47 vm11 bash[23232]: cluster 
2026-03-08T23:05:45.647935+0000 mgr.y (mgr.24419) 116 : cluster [DBG] pgmap v68: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:05:47.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:05:47 vm11 bash[23232]: cluster 2026-03-08T23:05:45.647935+0000 mgr.y (mgr.24419) 116 : cluster [DBG] pgmap v68: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:05:47.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:05:47 vm11 bash[23232]: audit 2026-03-08T23:05:46.591520+0000 mon.a (mon.0) 820 : audit [INF] from='client.? 192.168.123.106:0/2368506394' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:05:47.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:05:47 vm11 bash[23232]: audit 2026-03-08T23:05:46.591520+0000 mon.a (mon.0) 820 : audit [INF] from='client.? 192.168.123.106:0/2368506394' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:05:47.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:05:47 vm06 bash[20625]: cluster 2026-03-08T23:05:45.647935+0000 mgr.y (mgr.24419) 116 : cluster [DBG] pgmap v68: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:05:47.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:05:47 vm06 bash[20625]: cluster 2026-03-08T23:05:45.647935+0000 mgr.y (mgr.24419) 116 : cluster [DBG] pgmap v68: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:05:47.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:05:47 vm06 bash[20625]: audit 2026-03-08T23:05:46.591520+0000 mon.a (mon.0) 820 : audit [INF] from='client.? 
192.168.123.106:0/2368506394' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:05:47.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:05:47 vm06 bash[20625]: audit 2026-03-08T23:05:46.591520+0000 mon.a (mon.0) 820 : audit [INF] from='client.? 192.168.123.106:0/2368506394' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:05:47.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:05:47 vm06 bash[27746]: cluster 2026-03-08T23:05:45.647935+0000 mgr.y (mgr.24419) 116 : cluster [DBG] pgmap v68: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:05:47.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:05:47 vm06 bash[27746]: cluster 2026-03-08T23:05:45.647935+0000 mgr.y (mgr.24419) 116 : cluster [DBG] pgmap v68: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:05:47.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:05:47 vm06 bash[27746]: audit 2026-03-08T23:05:46.591520+0000 mon.a (mon.0) 820 : audit [INF] from='client.? 192.168.123.106:0/2368506394' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:05:47.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:05:47 vm06 bash[27746]: audit 2026-03-08T23:05:46.591520+0000 mon.a (mon.0) 820 : audit [INF] from='client.? 
192.168.123.106:0/2368506394' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch
2026-03-08T23:05:49.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:05:49 vm06 bash[20625]: cluster 2026-03-08T23:05:47.648240+0000 mgr.y (mgr.24419) 117 : cluster [DBG] pgmap v69: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-08T23:05:49.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:05:49 vm06 bash[27746]: cluster 2026-03-08T23:05:47.648240+0000 mgr.y (mgr.24419) 117 : cluster [DBG] pgmap v69: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-08T23:05:49.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:05:49 vm11 bash[23232]: cluster 2026-03-08T23:05:47.648240+0000 mgr.y (mgr.24419) 117 : cluster [DBG] pgmap v69: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-08T23:05:51.029 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:05:50 vm06 bash[20883]: ::ffff:192.168.123.111 - - [08/Mar/2026:23:05:50] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-08T23:05:51.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:05:51 vm06 bash[20625]: cluster 2026-03-08T23:05:49.648541+0000 mgr.y (mgr.24419) 118 : cluster [DBG] pgmap v70: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-08T23:05:51.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:05:51 vm06 bash[20625]: audit 2026-03-08T23:05:50.391519+0000 mon.c (mon.2) 77 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T23:05:51.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:05:51 vm06 bash[20625]: audit 2026-03-08T23:05:50.743061+0000 mon.c (mon.2) 78 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:05:51.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:05:51 vm06 bash[20625]: audit 2026-03-08T23:05:50.744144+0000 mon.c (mon.2) 79 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T23:05:51.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:05:51 vm06 bash[20625]: audit 2026-03-08T23:05:50.750921+0000 mon.a (mon.0) 821 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:05:51.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:05:51 vm06 bash[27746]: cluster 2026-03-08T23:05:49.648541+0000 mgr.y (mgr.24419) 118 : cluster [DBG] pgmap v70: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-08T23:05:51.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:05:51 vm06 bash[27746]: audit 2026-03-08T23:05:50.391519+0000 mon.c (mon.2) 77 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T23:05:51.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:05:51 vm06 bash[27746]: audit 2026-03-08T23:05:50.743061+0000 mon.c (mon.2) 78 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:05:51.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:05:51 vm06 bash[27746]: audit 2026-03-08T23:05:50.744144+0000 mon.c (mon.2) 79 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T23:05:51.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:05:51 vm06 bash[27746]: audit 2026-03-08T23:05:50.750921+0000 mon.a (mon.0) 821 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:05:51.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:05:51 vm11 bash[23232]: cluster 2026-03-08T23:05:49.648541+0000 mgr.y (mgr.24419) 118 : cluster [DBG] pgmap v70: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-08T23:05:51.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:05:51 vm11 bash[23232]: audit 2026-03-08T23:05:50.391519+0000 mon.c (mon.2) 77 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T23:05:51.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:05:51 vm11 bash[23232]: audit 2026-03-08T23:05:50.743061+0000 mon.c (mon.2) 78 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:05:51.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:05:51 vm11 bash[23232]: audit 2026-03-08T23:05:50.744144+0000 mon.c (mon.2) 79 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T23:05:51.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:05:51 vm11 bash[23232]: audit 2026-03-08T23:05:50.750921+0000 mon.a (mon.0) 821 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:05:51.601 INFO:teuthology.orchestra.run.vm06.stderr:++ ceph auth get-key osd.0
2026-03-08T23:05:51.794 INFO:teuthology.orchestra.run.vm06.stderr:+ NK=AQAN/61p7ZnnJxAAZxJmtKxfFvu+znVbqXFatQ==
2026-03-08T23:05:51.794 INFO:teuthology.orchestra.run.vm06.stderr:+ '[' AQAN/61p7ZnnJxAAZxJmtKxfFvu+znVbqXFatQ== == AQAN/61p7ZnnJxAAZxJmtKxfFvu+znVbqXFatQ== ']'
2026-03-08T23:05:51.794 INFO:teuthology.orchestra.run.vm06.stderr:+ sleep 5
2026-03-08T23:05:52.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:05:52 vm06 bash[20625]: audit 2026-03-08T23:05:51.785867+0000 mon.a (mon.0) 822 : audit [INF] from='client.? 192.168.123.106:0/183055199' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch
2026-03-08T23:05:52.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:05:52 vm06 bash[27746]: audit 2026-03-08T23:05:51.785867+0000 mon.a (mon.0) 822 : audit [INF] from='client.? 192.168.123.106:0/183055199' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch
2026-03-08T23:05:52.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:05:52 vm11 bash[23232]: audit 2026-03-08T23:05:51.785867+0000 mon.a (mon.0) 822 : audit [INF] from='client.? 192.168.123.106:0/183055199' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch
2026-03-08T23:05:52.558 INFO:journalctl@ceph.iscsi.iscsi.a.vm11.stdout:Mar 08 23:05:52 vm11 bash[48986]: debug there is no tcmu-runner data available
2026-03-08T23:05:53.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:05:53 vm06 bash[20625]: cluster 2026-03-08T23:05:51.649325+0000 mgr.y (mgr.24419) 119 : cluster [DBG] pgmap v71: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-08T23:05:53.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:05:53 vm06 bash[20625]: audit 2026-03-08T23:05:52.146775+0000 mgr.y (mgr.24419) 120 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:05:53.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:05:53 vm06 bash[20625]: audit 2026-03-08T23:05:52.788007+0000 mon.c (mon.2) 80 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-08T23:05:53.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:05:53 vm06 bash[27746]: cluster 2026-03-08T23:05:51.649325+0000 mgr.y (mgr.24419) 119 : cluster [DBG] pgmap v71: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-08T23:05:53.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:05:53 vm06 bash[27746]: audit 2026-03-08T23:05:52.146775+0000 mgr.y (mgr.24419) 120 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:05:53.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:05:53 vm06 bash[27746]: audit 2026-03-08T23:05:52.788007+0000 mon.c (mon.2) 80 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-08T23:05:53.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:05:53 vm11 bash[23232]: cluster 2026-03-08T23:05:51.649325+0000 mgr.y (mgr.24419) 119 : cluster [DBG] pgmap v71: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-08T23:05:53.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:05:53 vm11 bash[23232]: audit 2026-03-08T23:05:52.146775+0000 mgr.y (mgr.24419) 120 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:05:53.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:05:53 vm11 bash[23232]: audit 2026-03-08T23:05:52.788007+0000 mon.c (mon.2) 80 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-08T23:05:55.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:05:55 vm06 bash[20625]: cluster 2026-03-08T23:05:53.649638+0000 mgr.y (mgr.24419) 121 : cluster [DBG] pgmap v72: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-08T23:05:55.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:05:55 vm06 bash[27746]: cluster 2026-03-08T23:05:53.649638+0000 mgr.y (mgr.24419) 121 : cluster [DBG] pgmap v72: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-08T23:05:55.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:05:55 vm11 bash[23232]: cluster 2026-03-08T23:05:53.649638+0000 mgr.y (mgr.24419) 121 : cluster [DBG] pgmap v72: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-08T23:05:56.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:05:56 vm06 bash[20625]: cluster 2026-03-08T23:05:55.650151+0000 mgr.y (mgr.24419) 122 : cluster [DBG] pgmap v73: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-08T23:05:56.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:05:56 vm06 bash[27746]: cluster 2026-03-08T23:05:55.650151+0000 mgr.y (mgr.24419) 122 : cluster [DBG] pgmap v73: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-08T23:05:56.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:05:56 vm11 bash[23232]: cluster 2026-03-08T23:05:55.650151+0000 mgr.y (mgr.24419) 122 : cluster [DBG] pgmap v73: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-08T23:05:56.796 INFO:teuthology.orchestra.run.vm06.stderr:++ ceph auth get-key osd.0
2026-03-08T23:05:56.995 INFO:teuthology.orchestra.run.vm06.stderr:+ NK=AQAN/61p7ZnnJxAAZxJmtKxfFvu+znVbqXFatQ==
2026-03-08T23:05:56.995 INFO:teuthology.orchestra.run.vm06.stderr:+ '[' AQAN/61p7ZnnJxAAZxJmtKxfFvu+znVbqXFatQ== == AQAN/61p7ZnnJxAAZxJmtKxfFvu+znVbqXFatQ== ']'
2026-03-08T23:05:56.995 INFO:teuthology.orchestra.run.vm06.stderr:+ sleep 5
2026-03-08T23:05:57.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:05:57 vm06 bash[20625]: audit 2026-03-08T23:05:56.985695+0000 mon.c (mon.2) 81 : audit [INF] from='client.? 192.168.123.106:0/1346463602' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch
2026-03-08T23:05:57.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:05:57 vm06 bash[27746]: audit 2026-03-08T23:05:56.985695+0000 mon.c (mon.2) 81 : audit [INF] from='client.? 192.168.123.106:0/1346463602' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch
2026-03-08T23:05:57.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:05:57 vm11 bash[23232]: audit 2026-03-08T23:05:56.985695+0000 mon.c (mon.2) 81 : audit [INF] from='client.? 192.168.123.106:0/1346463602' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch
2026-03-08T23:05:58.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:05:58 vm06 bash[20625]: cluster 2026-03-08T23:05:57.650467+0000 mgr.y (mgr.24419) 123 : cluster [DBG] pgmap v74: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-08T23:05:58.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:05:58 vm06 bash[27746]: cluster 2026-03-08T23:05:57.650467+0000 mgr.y (mgr.24419) 123 : cluster [DBG] pgmap v74: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-08T23:05:58.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:05:58 vm11 bash[23232]: cluster 2026-03-08T23:05:57.650467+0000 mgr.y (mgr.24419) 123 : cluster [DBG] pgmap v74: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-08T23:06:01.029 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:06:00 vm06 bash[20883]: ::ffff:192.168.123.111 - - [08/Mar/2026:23:06:00] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-08T23:06:01.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:00 vm06 bash[20625]: cluster 2026-03-08T23:05:59.650742+0000 mgr.y (mgr.24419) 124 : cluster [DBG] pgmap v75: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-08T23:06:01.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:00 vm06 bash[27746]: cluster 2026-03-08T23:05:59.650742+0000 mgr.y (mgr.24419) 124 : cluster [DBG] pgmap v75: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-08T23:06:01.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:00 vm11 bash[23232]: cluster 2026-03-08T23:05:59.650742+0000 mgr.y (mgr.24419) 124 : cluster [DBG] pgmap v75: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-08T23:06:01.996 INFO:teuthology.orchestra.run.vm06.stderr:++ ceph auth get-key osd.0
2026-03-08T23:06:02.189 INFO:teuthology.orchestra.run.vm06.stderr:+ NK=AQAN/61p7ZnnJxAAZxJmtKxfFvu+znVbqXFatQ==
2026-03-08T23:06:02.189 INFO:teuthology.orchestra.run.vm06.stderr:+ '[' AQAN/61p7ZnnJxAAZxJmtKxfFvu+znVbqXFatQ== == AQAN/61p7ZnnJxAAZxJmtKxfFvu+znVbqXFatQ== ']'
2026-03-08T23:06:02.189 INFO:teuthology.orchestra.run.vm06.stderr:+ sleep 5
2026-03-08T23:06:02.558 INFO:journalctl@ceph.iscsi.iscsi.a.vm11.stdout:Mar 08 23:06:02 vm11 bash[48986]: debug there is no tcmu-runner data available
2026-03-08T23:06:03.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:03 vm06 bash[20625]: cluster 2026-03-08T23:06:01.651160+0000 mgr.y (mgr.24419) 125 : cluster [DBG] pgmap v76: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-08T23:06:03.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:03 vm06 bash[20625]: audit 2026-03-08T23:06:02.156192+0000 mgr.y (mgr.24419) 126 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:06:03.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:03 vm06 bash[20625]: audit 2026-03-08T23:06:02.181643+0000 mon.a (mon.0) 823 : audit [INF] from='client.? 192.168.123.106:0/3910480449' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch
2026-03-08T23:06:03.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:03 vm06 bash[27746]: cluster 2026-03-08T23:06:01.651160+0000 mgr.y (mgr.24419) 125 : cluster [DBG] pgmap v76: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-08T23:06:03.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:03 vm06 bash[27746]: audit 2026-03-08T23:06:02.156192+0000 mgr.y (mgr.24419) 126 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:06:03.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:03 vm06 bash[27746]: audit 2026-03-08T23:06:02.181643+0000 mon.a (mon.0) 823 : audit [INF] from='client.? 192.168.123.106:0/3910480449' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch
2026-03-08T23:06:03.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:03 vm11 bash[23232]: cluster 2026-03-08T23:06:01.651160+0000 mgr.y (mgr.24419) 125 : cluster [DBG] pgmap v76: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-08T23:06:03.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:03 vm11 bash[23232]: audit 2026-03-08T23:06:02.156192+0000 mgr.y (mgr.24419) 126 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:06:03.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:03 vm11 bash[23232]: audit 2026-03-08T23:06:02.181643+0000 mon.a (mon.0) 823 : audit [INF] from='client.? 192.168.123.106:0/3910480449' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch
2026-03-08T23:06:05.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:05 vm06 bash[20625]: cluster 2026-03-08T23:06:03.651386+0000 mgr.y (mgr.24419) 127 : cluster [DBG] pgmap v77: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-08T23:06:05.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:05 vm06 bash[27746]: cluster 2026-03-08T23:06:03.651386+0000 mgr.y (mgr.24419) 127 : cluster [DBG] pgmap v77: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-08T23:06:05.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:05 vm11 bash[23232]: cluster 2026-03-08T23:06:03.651386+0000 mgr.y (mgr.24419) 127 : cluster [DBG] pgmap v77: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-08T23:06:07.191 INFO:teuthology.orchestra.run.vm06.stderr:++ ceph auth get-key osd.0
2026-03-08T23:06:07.392 INFO:teuthology.orchestra.run.vm06.stderr:+ NK=AQAN/61p7ZnnJxAAZxJmtKxfFvu+znVbqXFatQ==
2026-03-08T23:06:07.392 INFO:teuthology.orchestra.run.vm06.stderr:+ '[' AQAN/61p7ZnnJxAAZxJmtKxfFvu+znVbqXFatQ== == AQAN/61p7ZnnJxAAZxJmtKxfFvu+znVbqXFatQ== ']'
2026-03-08T23:06:07.392 INFO:teuthology.orchestra.run.vm06.stderr:+ sleep 5
2026-03-08T23:06:07.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:07 vm06 bash[20625]: cluster 2026-03-08T23:06:05.651884+0000 mgr.y (mgr.24419) 128 : cluster [DBG] pgmap v78: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-08T23:06:07.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:07 vm06 bash[27746]: cluster 2026-03-08T23:06:05.651884+0000 mgr.y (mgr.24419) 128 : cluster [DBG] pgmap v78: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-08T23:06:07.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:07 vm11 bash[23232]: cluster 2026-03-08T23:06:05.651884+0000 mgr.y (mgr.24419) 128 : cluster [DBG] pgmap v78: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-08T23:06:08.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:08 vm06 bash[20625]: audit 2026-03-08T23:06:07.383153+0000 mon.c (mon.2) 82 : audit [INF] from='client.? 192.168.123.106:0/880719346' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch
2026-03-08T23:06:08.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:08 vm06 bash[20625]: audit 2026-03-08T23:06:07.793735+0000 mon.c (mon.2) 83 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-08T23:06:08.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:08 vm06 bash[27746]: audit 2026-03-08T23:06:07.383153+0000 mon.c (mon.2) 82 : audit [INF] from='client.? 192.168.123.106:0/880719346' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch
2026-03-08T23:06:08.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:08 vm06 bash[27746]: audit 2026-03-08T23:06:07.793735+0000 mon.c (mon.2) 83 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-08T23:06:08.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:08 vm11 bash[23232]: audit 2026-03-08T23:06:07.383153+0000 mon.c (mon.2) 82 : audit [INF] from='client.? 192.168.123.106:0/880719346' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch
2026-03-08T23:06:08.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:08 vm11 bash[23232]: audit 2026-03-08T23:06:07.383153+0000 mon.c (mon.2) 82 : audit [INF] from='client.? 
192.168.123.106:0/880719346' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:06:08.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:08 vm11 bash[23232]: audit 2026-03-08T23:06:07.793735+0000 mon.c (mon.2) 83 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:06:08.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:08 vm11 bash[23232]: audit 2026-03-08T23:06:07.793735+0000 mon.c (mon.2) 83 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:06:09.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:09 vm06 bash[20625]: cluster 2026-03-08T23:06:07.652179+0000 mgr.y (mgr.24419) 129 : cluster [DBG] pgmap v79: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:06:09.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:09 vm06 bash[20625]: cluster 2026-03-08T23:06:07.652179+0000 mgr.y (mgr.24419) 129 : cluster [DBG] pgmap v79: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:06:09.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:09 vm06 bash[27746]: cluster 2026-03-08T23:06:07.652179+0000 mgr.y (mgr.24419) 129 : cluster [DBG] pgmap v79: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:06:09.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:09 vm06 bash[27746]: cluster 2026-03-08T23:06:07.652179+0000 mgr.y (mgr.24419) 129 : cluster [DBG] pgmap v79: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:06:09.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:09 vm11 bash[23232]: cluster 2026-03-08T23:06:07.652179+0000 mgr.y (mgr.24419) 129 : 
cluster [DBG] pgmap v79: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:06:09.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:09 vm11 bash[23232]: cluster 2026-03-08T23:06:07.652179+0000 mgr.y (mgr.24419) 129 : cluster [DBG] pgmap v79: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:06:11.029 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:06:10 vm06 bash[20883]: ::ffff:192.168.123.111 - - [08/Mar/2026:23:06:10] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-08T23:06:11.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:11 vm06 bash[20625]: cluster 2026-03-08T23:06:09.652409+0000 mgr.y (mgr.24419) 130 : cluster [DBG] pgmap v80: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:06:11.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:11 vm06 bash[20625]: cluster 2026-03-08T23:06:09.652409+0000 mgr.y (mgr.24419) 130 : cluster [DBG] pgmap v80: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:06:11.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:11 vm06 bash[27746]: cluster 2026-03-08T23:06:09.652409+0000 mgr.y (mgr.24419) 130 : cluster [DBG] pgmap v80: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:06:11.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:11 vm06 bash[27746]: cluster 2026-03-08T23:06:09.652409+0000 mgr.y (mgr.24419) 130 : cluster [DBG] pgmap v80: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:06:11.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:11 vm11 bash[23232]: cluster 2026-03-08T23:06:09.652409+0000 mgr.y (mgr.24419) 130 : cluster [DBG] pgmap v80: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 
GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:06:11.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:11 vm11 bash[23232]: cluster 2026-03-08T23:06:09.652409+0000 mgr.y (mgr.24419) 130 : cluster [DBG] pgmap v80: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:06:12.393 INFO:teuthology.orchestra.run.vm06.stderr:++ ceph auth get-key osd.0 2026-03-08T23:06:12.558 INFO:journalctl@ceph.iscsi.iscsi.a.vm11.stdout:Mar 08 23:06:12 vm11 bash[48986]: debug there is no tcmu-runner data available 2026-03-08T23:06:12.595 INFO:teuthology.orchestra.run.vm06.stderr:+ NK=AQAN/61p7ZnnJxAAZxJmtKxfFvu+znVbqXFatQ== 2026-03-08T23:06:12.595 INFO:teuthology.orchestra.run.vm06.stderr:+ '[' AQAN/61p7ZnnJxAAZxJmtKxfFvu+znVbqXFatQ== == AQAN/61p7ZnnJxAAZxJmtKxfFvu+znVbqXFatQ== ']' 2026-03-08T23:06:12.595 INFO:teuthology.orchestra.run.vm06.stderr:+ sleep 5 2026-03-08T23:06:13.813 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:13 vm06 bash[20625]: cluster 2026-03-08T23:06:11.652824+0000 mgr.y (mgr.24419) 131 : cluster [DBG] pgmap v81: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:06:13.813 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:13 vm06 bash[20625]: cluster 2026-03-08T23:06:11.652824+0000 mgr.y (mgr.24419) 131 : cluster [DBG] pgmap v81: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:06:13.813 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:13 vm06 bash[20625]: audit 2026-03-08T23:06:12.166871+0000 mgr.y (mgr.24419) 132 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:06:13.813 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:13 vm06 bash[20625]: audit 2026-03-08T23:06:12.166871+0000 mgr.y (mgr.24419) 132 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' 
cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:06:13.813 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:13 vm06 bash[20625]: audit 2026-03-08T23:06:12.586216+0000 mon.c (mon.2) 84 : audit [INF] from='client.? 192.168.123.106:0/3812346290' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:06:13.813 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:13 vm06 bash[20625]: audit 2026-03-08T23:06:12.586216+0000 mon.c (mon.2) 84 : audit [INF] from='client.? 192.168.123.106:0/3812346290' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:06:13.813 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:13 vm06 bash[27746]: cluster 2026-03-08T23:06:11.652824+0000 mgr.y (mgr.24419) 131 : cluster [DBG] pgmap v81: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:06:13.813 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:13 vm06 bash[27746]: cluster 2026-03-08T23:06:11.652824+0000 mgr.y (mgr.24419) 131 : cluster [DBG] pgmap v81: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:06:13.813 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:13 vm06 bash[27746]: audit 2026-03-08T23:06:12.166871+0000 mgr.y (mgr.24419) 132 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:06:13.813 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:13 vm06 bash[27746]: audit 2026-03-08T23:06:12.166871+0000 mgr.y (mgr.24419) 132 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:06:13.813 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:13 vm06 bash[27746]: audit 2026-03-08T23:06:12.586216+0000 mon.c (mon.2) 84 : audit [INF] from='client.? 
192.168.123.106:0/3812346290' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:06:13.813 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:13 vm06 bash[27746]: audit 2026-03-08T23:06:12.586216+0000 mon.c (mon.2) 84 : audit [INF] from='client.? 192.168.123.106:0/3812346290' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:06:14.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:13 vm11 bash[23232]: cluster 2026-03-08T23:06:11.652824+0000 mgr.y (mgr.24419) 131 : cluster [DBG] pgmap v81: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:06:14.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:13 vm11 bash[23232]: cluster 2026-03-08T23:06:11.652824+0000 mgr.y (mgr.24419) 131 : cluster [DBG] pgmap v81: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:06:14.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:13 vm11 bash[23232]: audit 2026-03-08T23:06:12.166871+0000 mgr.y (mgr.24419) 132 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:06:14.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:13 vm11 bash[23232]: audit 2026-03-08T23:06:12.166871+0000 mgr.y (mgr.24419) 132 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:06:14.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:13 vm11 bash[23232]: audit 2026-03-08T23:06:12.586216+0000 mon.c (mon.2) 84 : audit [INF] from='client.? 
192.168.123.106:0/3812346290' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:06:14.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:13 vm11 bash[23232]: audit 2026-03-08T23:06:12.586216+0000 mon.c (mon.2) 84 : audit [INF] from='client.? 192.168.123.106:0/3812346290' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:06:15.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:14 vm06 bash[20625]: cluster 2026-03-08T23:06:13.653070+0000 mgr.y (mgr.24419) 133 : cluster [DBG] pgmap v82: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:06:15.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:14 vm06 bash[20625]: cluster 2026-03-08T23:06:13.653070+0000 mgr.y (mgr.24419) 133 : cluster [DBG] pgmap v82: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:06:15.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:14 vm06 bash[27746]: cluster 2026-03-08T23:06:13.653070+0000 mgr.y (mgr.24419) 133 : cluster [DBG] pgmap v82: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:06:15.030 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:14 vm06 bash[27746]: cluster 2026-03-08T23:06:13.653070+0000 mgr.y (mgr.24419) 133 : cluster [DBG] pgmap v82: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:06:15.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:14 vm11 bash[23232]: cluster 2026-03-08T23:06:13.653070+0000 mgr.y (mgr.24419) 133 : cluster [DBG] pgmap v82: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:06:15.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:14 vm11 bash[23232]: cluster 2026-03-08T23:06:13.653070+0000 mgr.y (mgr.24419) 133 : 
cluster [DBG] pgmap v82: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:06:17.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:17 vm06 bash[20625]: cluster 2026-03-08T23:06:15.653524+0000 mgr.y (mgr.24419) 134 : cluster [DBG] pgmap v83: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:06:17.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:17 vm06 bash[20625]: cluster 2026-03-08T23:06:15.653524+0000 mgr.y (mgr.24419) 134 : cluster [DBG] pgmap v83: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:06:17.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:17 vm06 bash[27746]: cluster 2026-03-08T23:06:15.653524+0000 mgr.y (mgr.24419) 134 : cluster [DBG] pgmap v83: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:06:17.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:17 vm06 bash[27746]: cluster 2026-03-08T23:06:15.653524+0000 mgr.y (mgr.24419) 134 : cluster [DBG] pgmap v83: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:06:17.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:17 vm11 bash[23232]: cluster 2026-03-08T23:06:15.653524+0000 mgr.y (mgr.24419) 134 : cluster [DBG] pgmap v83: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:06:17.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:17 vm11 bash[23232]: cluster 2026-03-08T23:06:15.653524+0000 mgr.y (mgr.24419) 134 : cluster [DBG] pgmap v83: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:06:17.597 INFO:teuthology.orchestra.run.vm06.stderr:++ ceph auth get-key osd.0 2026-03-08T23:06:17.816 
INFO:teuthology.orchestra.run.vm06.stderr:+ NK=AQAN/61p7ZnnJxAAZxJmtKxfFvu+znVbqXFatQ== 2026-03-08T23:06:17.816 INFO:teuthology.orchestra.run.vm06.stderr:+ '[' AQAN/61p7ZnnJxAAZxJmtKxfFvu+znVbqXFatQ== == AQAN/61p7ZnnJxAAZxJmtKxfFvu+znVbqXFatQ== ']' 2026-03-08T23:06:17.816 INFO:teuthology.orchestra.run.vm06.stderr:+ sleep 5 2026-03-08T23:06:18.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:18 vm06 bash[20625]: cluster 2026-03-08T23:06:17.653813+0000 mgr.y (mgr.24419) 135 : cluster [DBG] pgmap v84: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:06:18.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:18 vm06 bash[20625]: cluster 2026-03-08T23:06:17.653813+0000 mgr.y (mgr.24419) 135 : cluster [DBG] pgmap v84: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:06:18.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:18 vm06 bash[20625]: audit 2026-03-08T23:06:17.806521+0000 mon.a (mon.0) 824 : audit [INF] from='client.? 192.168.123.106:0/2001833987' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:06:18.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:18 vm06 bash[20625]: audit 2026-03-08T23:06:17.806521+0000 mon.a (mon.0) 824 : audit [INF] from='client.? 
192.168.123.106:0/2001833987' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:06:18.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:18 vm06 bash[27746]: cluster 2026-03-08T23:06:17.653813+0000 mgr.y (mgr.24419) 135 : cluster [DBG] pgmap v84: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:06:18.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:18 vm06 bash[27746]: cluster 2026-03-08T23:06:17.653813+0000 mgr.y (mgr.24419) 135 : cluster [DBG] pgmap v84: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:06:18.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:18 vm06 bash[27746]: audit 2026-03-08T23:06:17.806521+0000 mon.a (mon.0) 824 : audit [INF] from='client.? 192.168.123.106:0/2001833987' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:06:18.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:18 vm06 bash[27746]: audit 2026-03-08T23:06:17.806521+0000 mon.a (mon.0) 824 : audit [INF] from='client.? 
192.168.123.106:0/2001833987' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:06:18.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:18 vm11 bash[23232]: cluster 2026-03-08T23:06:17.653813+0000 mgr.y (mgr.24419) 135 : cluster [DBG] pgmap v84: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:06:18.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:18 vm11 bash[23232]: cluster 2026-03-08T23:06:17.653813+0000 mgr.y (mgr.24419) 135 : cluster [DBG] pgmap v84: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:06:18.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:18 vm11 bash[23232]: audit 2026-03-08T23:06:17.806521+0000 mon.a (mon.0) 824 : audit [INF] from='client.? 192.168.123.106:0/2001833987' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:06:18.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:18 vm11 bash[23232]: audit 2026-03-08T23:06:17.806521+0000 mon.a (mon.0) 824 : audit [INF] from='client.? 
192.168.123.106:0/2001833987' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:06:21.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:20 vm06 bash[20625]: cluster 2026-03-08T23:06:19.654035+0000 mgr.y (mgr.24419) 136 : cluster [DBG] pgmap v85: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:06:21.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:20 vm06 bash[20625]: cluster 2026-03-08T23:06:19.654035+0000 mgr.y (mgr.24419) 136 : cluster [DBG] pgmap v85: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:06:21.029 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:06:20 vm06 bash[20883]: ::ffff:192.168.123.111 - - [08/Mar/2026:23:06:20] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-08T23:06:21.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:20 vm06 bash[27746]: cluster 2026-03-08T23:06:19.654035+0000 mgr.y (mgr.24419) 136 : cluster [DBG] pgmap v85: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:06:21.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:20 vm06 bash[27746]: cluster 2026-03-08T23:06:19.654035+0000 mgr.y (mgr.24419) 136 : cluster [DBG] pgmap v85: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:06:21.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:20 vm11 bash[23232]: cluster 2026-03-08T23:06:19.654035+0000 mgr.y (mgr.24419) 136 : cluster [DBG] pgmap v85: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:06:21.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:20 vm11 bash[23232]: cluster 2026-03-08T23:06:19.654035+0000 mgr.y (mgr.24419) 136 : cluster [DBG] pgmap v85: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 
GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:06:22.557 INFO:journalctl@ceph.iscsi.iscsi.a.vm11.stdout:Mar 08 23:06:22 vm11 bash[48986]: debug there is no tcmu-runner data available 2026-03-08T23:06:22.817 INFO:teuthology.orchestra.run.vm06.stderr:++ ceph auth get-key osd.0 2026-03-08T23:06:23.009 INFO:teuthology.orchestra.run.vm06.stderr:+ NK=AQAN/61p7ZnnJxAAZxJmtKxfFvu+znVbqXFatQ== 2026-03-08T23:06:23.009 INFO:teuthology.orchestra.run.vm06.stderr:+ '[' AQAN/61p7ZnnJxAAZxJmtKxfFvu+znVbqXFatQ== == AQAN/61p7ZnnJxAAZxJmtKxfFvu+znVbqXFatQ== ']' 2026-03-08T23:06:23.009 INFO:teuthology.orchestra.run.vm06.stderr:+ sleep 5 2026-03-08T23:06:23.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:22 vm06 bash[20625]: cluster 2026-03-08T23:06:21.654428+0000 mgr.y (mgr.24419) 137 : cluster [DBG] pgmap v86: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:06:23.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:22 vm06 bash[20625]: cluster 2026-03-08T23:06:21.654428+0000 mgr.y (mgr.24419) 137 : cluster [DBG] pgmap v86: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:06:23.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:22 vm06 bash[20625]: audit 2026-03-08T23:06:22.172504+0000 mgr.y (mgr.24419) 138 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:06:23.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:22 vm06 bash[20625]: audit 2026-03-08T23:06:22.172504+0000 mgr.y (mgr.24419) 138 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:06:23.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:22 vm06 bash[27746]: cluster 2026-03-08T23:06:21.654428+0000 mgr.y (mgr.24419) 137 : cluster [DBG] pgmap v86: 132 pgs: 132 active+clean; 455 KiB data, 216 
MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:06:23.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:22 vm06 bash[27746]: cluster 2026-03-08T23:06:21.654428+0000 mgr.y (mgr.24419) 137 : cluster [DBG] pgmap v86: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:06:23.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:22 vm06 bash[27746]: audit 2026-03-08T23:06:22.172504+0000 mgr.y (mgr.24419) 138 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:06:23.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:22 vm06 bash[27746]: audit 2026-03-08T23:06:22.172504+0000 mgr.y (mgr.24419) 138 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:06:23.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:22 vm11 bash[23232]: cluster 2026-03-08T23:06:21.654428+0000 mgr.y (mgr.24419) 137 : cluster [DBG] pgmap v86: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:06:23.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:22 vm11 bash[23232]: cluster 2026-03-08T23:06:21.654428+0000 mgr.y (mgr.24419) 137 : cluster [DBG] pgmap v86: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:06:23.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:22 vm11 bash[23232]: audit 2026-03-08T23:06:22.172504+0000 mgr.y (mgr.24419) 138 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:06:23.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:22 vm11 bash[23232]: audit 2026-03-08T23:06:22.172504+0000 mgr.y (mgr.24419) 138 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' 
cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:06:24.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:23 vm06 bash[20625]: audit 2026-03-08T23:06:22.800018+0000 mon.c (mon.2) 85 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:06:24.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:23 vm06 bash[20625]: audit 2026-03-08T23:06:22.800018+0000 mon.c (mon.2) 85 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:06:24.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:23 vm06 bash[20625]: audit 2026-03-08T23:06:22.999769+0000 mon.c (mon.2) 86 : audit [INF] from='client.? 192.168.123.106:0/77395224' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:06:24.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:23 vm06 bash[20625]: audit 2026-03-08T23:06:22.999769+0000 mon.c (mon.2) 86 : audit [INF] from='client.? 192.168.123.106:0/77395224' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:06:24.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:23 vm06 bash[27746]: audit 2026-03-08T23:06:22.800018+0000 mon.c (mon.2) 85 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:06:24.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:23 vm06 bash[27746]: audit 2026-03-08T23:06:22.800018+0000 mon.c (mon.2) 85 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:06:24.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:23 vm06 bash[27746]: audit 2026-03-08T23:06:22.999769+0000 mon.c (mon.2) 86 : audit [INF] from='client.? 
192.168.123.106:0/77395224' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:06:24.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:23 vm06 bash[27746]: audit 2026-03-08T23:06:22.999769+0000 mon.c (mon.2) 86 : audit [INF] from='client.? 192.168.123.106:0/77395224' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:06:24.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:23 vm11 bash[23232]: audit 2026-03-08T23:06:22.800018+0000 mon.c (mon.2) 85 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:06:24.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:23 vm11 bash[23232]: audit 2026-03-08T23:06:22.800018+0000 mon.c (mon.2) 85 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:06:24.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:23 vm11 bash[23232]: audit 2026-03-08T23:06:22.999769+0000 mon.c (mon.2) 86 : audit [INF] from='client.? 192.168.123.106:0/77395224' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:06:24.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:23 vm11 bash[23232]: audit 2026-03-08T23:06:22.999769+0000 mon.c (mon.2) 86 : audit [INF] from='client.? 
192.168.123.106:0/77395224' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:06:25.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:24 vm06 bash[20625]: cluster 2026-03-08T23:06:23.654659+0000 mgr.y (mgr.24419) 139 : cluster [DBG] pgmap v87: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:06:25.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:24 vm06 bash[20625]: cluster 2026-03-08T23:06:23.654659+0000 mgr.y (mgr.24419) 139 : cluster [DBG] pgmap v87: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:06:25.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:24 vm06 bash[27746]: cluster 2026-03-08T23:06:23.654659+0000 mgr.y (mgr.24419) 139 : cluster [DBG] pgmap v87: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:06:25.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:24 vm06 bash[27746]: cluster 2026-03-08T23:06:23.654659+0000 mgr.y (mgr.24419) 139 : cluster [DBG] pgmap v87: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:06:25.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:24 vm11 bash[23232]: cluster 2026-03-08T23:06:23.654659+0000 mgr.y (mgr.24419) 139 : cluster [DBG] pgmap v87: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:06:25.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:24 vm11 bash[23232]: cluster 2026-03-08T23:06:23.654659+0000 mgr.y (mgr.24419) 139 : cluster [DBG] pgmap v87: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:06:27.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:26 vm06 bash[20625]: cluster 2026-03-08T23:06:25.655031+0000 mgr.y (mgr.24419) 140 : cluster [DBG] 
pgmap v88: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:06:27.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:26 vm06 bash[20625]: cluster 2026-03-08T23:06:25.655031+0000 mgr.y (mgr.24419) 140 : cluster [DBG] pgmap v88: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:06:27.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:26 vm06 bash[27746]: cluster 2026-03-08T23:06:25.655031+0000 mgr.y (mgr.24419) 140 : cluster [DBG] pgmap v88: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:06:27.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:26 vm06 bash[27746]: cluster 2026-03-08T23:06:25.655031+0000 mgr.y (mgr.24419) 140 : cluster [DBG] pgmap v88: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:06:27.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:26 vm11 bash[23232]: cluster 2026-03-08T23:06:25.655031+0000 mgr.y (mgr.24419) 140 : cluster [DBG] pgmap v88: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:06:27.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:26 vm11 bash[23232]: cluster 2026-03-08T23:06:25.655031+0000 mgr.y (mgr.24419) 140 : cluster [DBG] pgmap v88: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:06:28.010 INFO:teuthology.orchestra.run.vm06.stderr:++ ceph auth get-key osd.0 2026-03-08T23:06:28.206 INFO:teuthology.orchestra.run.vm06.stderr:+ NK=AQAN/61p7ZnnJxAAZxJmtKxfFvu+znVbqXFatQ== 2026-03-08T23:06:28.206 INFO:teuthology.orchestra.run.vm06.stderr:+ '[' AQAN/61p7ZnnJxAAZxJmtKxfFvu+znVbqXFatQ== == AQAN/61p7ZnnJxAAZxJmtKxfFvu+znVbqXFatQ== ']' 2026-03-08T23:06:28.206 INFO:teuthology.orchestra.run.vm06.stderr:+ sleep 5 
2026-03-08T23:06:29.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:28 vm06 bash[20625]: cluster 2026-03-08T23:06:27.655337+0000 mgr.y (mgr.24419) 141 : cluster [DBG] pgmap v89: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:06:29.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:28 vm06 bash[20625]: cluster 2026-03-08T23:06:27.655337+0000 mgr.y (mgr.24419) 141 : cluster [DBG] pgmap v89: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:06:29.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:28 vm06 bash[20625]: audit 2026-03-08T23:06:28.196421+0000 mon.c (mon.2) 87 : audit [INF] from='client.? 192.168.123.106:0/4046117911' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:06:29.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:28 vm06 bash[20625]: audit 2026-03-08T23:06:28.196421+0000 mon.c (mon.2) 87 : audit [INF] from='client.? 192.168.123.106:0/4046117911' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:06:29.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:28 vm06 bash[27746]: cluster 2026-03-08T23:06:27.655337+0000 mgr.y (mgr.24419) 141 : cluster [DBG] pgmap v89: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:06:29.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:28 vm06 bash[27746]: cluster 2026-03-08T23:06:27.655337+0000 mgr.y (mgr.24419) 141 : cluster [DBG] pgmap v89: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:06:29.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:28 vm06 bash[27746]: audit 2026-03-08T23:06:28.196421+0000 mon.c (mon.2) 87 : audit [INF] from='client.? 
192.168.123.106:0/4046117911' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:06:29.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:28 vm06 bash[27746]: audit 2026-03-08T23:06:28.196421+0000 mon.c (mon.2) 87 : audit [INF] from='client.? 192.168.123.106:0/4046117911' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:06:29.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:28 vm11 bash[23232]: cluster 2026-03-08T23:06:27.655337+0000 mgr.y (mgr.24419) 141 : cluster [DBG] pgmap v89: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:06:29.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:28 vm11 bash[23232]: cluster 2026-03-08T23:06:27.655337+0000 mgr.y (mgr.24419) 141 : cluster [DBG] pgmap v89: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:06:29.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:28 vm11 bash[23232]: audit 2026-03-08T23:06:28.196421+0000 mon.c (mon.2) 87 : audit [INF] from='client.? 192.168.123.106:0/4046117911' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:06:29.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:28 vm11 bash[23232]: audit 2026-03-08T23:06:28.196421+0000 mon.c (mon.2) 87 : audit [INF] from='client.? 
192.168.123.106:0/4046117911' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:06:31.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:30 vm06 bash[20625]: cluster 2026-03-08T23:06:29.655578+0000 mgr.y (mgr.24419) 142 : cluster [DBG] pgmap v90: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:06:31.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:30 vm06 bash[20625]: cluster 2026-03-08T23:06:29.655578+0000 mgr.y (mgr.24419) 142 : cluster [DBG] pgmap v90: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:06:31.029 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:06:30 vm06 bash[20883]: ::ffff:192.168.123.111 - - [08/Mar/2026:23:06:30] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-08T23:06:31.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:30 vm06 bash[27746]: cluster 2026-03-08T23:06:29.655578+0000 mgr.y (mgr.24419) 142 : cluster [DBG] pgmap v90: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:06:31.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:30 vm06 bash[27746]: cluster 2026-03-08T23:06:29.655578+0000 mgr.y (mgr.24419) 142 : cluster [DBG] pgmap v90: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:06:31.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:30 vm11 bash[23232]: cluster 2026-03-08T23:06:29.655578+0000 mgr.y (mgr.24419) 142 : cluster [DBG] pgmap v90: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:06:31.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:30 vm11 bash[23232]: cluster 2026-03-08T23:06:29.655578+0000 mgr.y (mgr.24419) 142 : cluster [DBG] pgmap v90: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 
GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:06:32.558 INFO:journalctl@ceph.iscsi.iscsi.a.vm11.stdout:Mar 08 23:06:32 vm11 bash[48986]: debug there is no tcmu-runner data available 2026-03-08T23:06:33.207 INFO:teuthology.orchestra.run.vm06.stderr:++ ceph auth get-key osd.0 2026-03-08T23:06:33.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:32 vm06 bash[20625]: cluster 2026-03-08T23:06:31.656045+0000 mgr.y (mgr.24419) 143 : cluster [DBG] pgmap v91: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:06:33.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:32 vm06 bash[20625]: cluster 2026-03-08T23:06:31.656045+0000 mgr.y (mgr.24419) 143 : cluster [DBG] pgmap v91: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:06:33.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:32 vm06 bash[20625]: audit 2026-03-08T23:06:32.183089+0000 mgr.y (mgr.24419) 144 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:06:33.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:32 vm06 bash[20625]: audit 2026-03-08T23:06:32.183089+0000 mgr.y (mgr.24419) 144 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:06:33.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:32 vm06 bash[27746]: cluster 2026-03-08T23:06:31.656045+0000 mgr.y (mgr.24419) 143 : cluster [DBG] pgmap v91: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:06:33.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:32 vm06 bash[27746]: cluster 2026-03-08T23:06:31.656045+0000 mgr.y (mgr.24419) 143 : cluster [DBG] pgmap v91: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 
2026-03-08T23:06:33.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:32 vm06 bash[27746]: audit 2026-03-08T23:06:32.183089+0000 mgr.y (mgr.24419) 144 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:06:33.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:32 vm06 bash[27746]: audit 2026-03-08T23:06:32.183089+0000 mgr.y (mgr.24419) 144 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:06:33.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:32 vm11 bash[23232]: cluster 2026-03-08T23:06:31.656045+0000 mgr.y (mgr.24419) 143 : cluster [DBG] pgmap v91: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:06:33.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:32 vm11 bash[23232]: cluster 2026-03-08T23:06:31.656045+0000 mgr.y (mgr.24419) 143 : cluster [DBG] pgmap v91: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:06:33.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:32 vm11 bash[23232]: audit 2026-03-08T23:06:32.183089+0000 mgr.y (mgr.24419) 144 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:06:33.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:32 vm11 bash[23232]: audit 2026-03-08T23:06:32.183089+0000 mgr.y (mgr.24419) 144 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:06:33.408 INFO:teuthology.orchestra.run.vm06.stderr:+ NK=AQAN/61p7ZnnJxAAZxJmtKxfFvu+znVbqXFatQ== 2026-03-08T23:06:33.408 INFO:teuthology.orchestra.run.vm06.stderr:+ '[' AQAN/61p7ZnnJxAAZxJmtKxfFvu+znVbqXFatQ== == AQAN/61p7ZnnJxAAZxJmtKxfFvu+znVbqXFatQ== ']' 
2026-03-08T23:06:33.408 INFO:teuthology.orchestra.run.vm06.stderr:+ sleep 5 2026-03-08T23:06:34.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:33 vm06 bash[20625]: audit 2026-03-08T23:06:33.398908+0000 mon.a (mon.0) 825 : audit [INF] from='client.? 192.168.123.106:0/745705370' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:06:34.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:33 vm06 bash[20625]: audit 2026-03-08T23:06:33.398908+0000 mon.a (mon.0) 825 : audit [INF] from='client.? 192.168.123.106:0/745705370' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:06:34.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:33 vm06 bash[27746]: audit 2026-03-08T23:06:33.398908+0000 mon.a (mon.0) 825 : audit [INF] from='client.? 192.168.123.106:0/745705370' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:06:34.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:33 vm06 bash[27746]: audit 2026-03-08T23:06:33.398908+0000 mon.a (mon.0) 825 : audit [INF] from='client.? 192.168.123.106:0/745705370' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:06:34.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:33 vm11 bash[23232]: audit 2026-03-08T23:06:33.398908+0000 mon.a (mon.0) 825 : audit [INF] from='client.? 192.168.123.106:0/745705370' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:06:34.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:33 vm11 bash[23232]: audit 2026-03-08T23:06:33.398908+0000 mon.a (mon.0) 825 : audit [INF] from='client.? 
192.168.123.106:0/745705370' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:06:35.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:34 vm06 bash[20625]: cluster 2026-03-08T23:06:33.656290+0000 mgr.y (mgr.24419) 145 : cluster [DBG] pgmap v92: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:06:35.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:34 vm06 bash[20625]: cluster 2026-03-08T23:06:33.656290+0000 mgr.y (mgr.24419) 145 : cluster [DBG] pgmap v92: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:06:35.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:34 vm06 bash[27746]: cluster 2026-03-08T23:06:33.656290+0000 mgr.y (mgr.24419) 145 : cluster [DBG] pgmap v92: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:06:35.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:34 vm06 bash[27746]: cluster 2026-03-08T23:06:33.656290+0000 mgr.y (mgr.24419) 145 : cluster [DBG] pgmap v92: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:06:35.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:34 vm11 bash[23232]: cluster 2026-03-08T23:06:33.656290+0000 mgr.y (mgr.24419) 145 : cluster [DBG] pgmap v92: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:06:35.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:34 vm11 bash[23232]: cluster 2026-03-08T23:06:33.656290+0000 mgr.y (mgr.24419) 145 : cluster [DBG] pgmap v92: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:06:37.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:36 vm06 bash[20625]: cluster 2026-03-08T23:06:35.656703+0000 mgr.y (mgr.24419) 146 : cluster [DBG] 
pgmap v93: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:06:37.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:36 vm06 bash[20625]: cluster 2026-03-08T23:06:35.656703+0000 mgr.y (mgr.24419) 146 : cluster [DBG] pgmap v93: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:06:37.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:36 vm06 bash[27746]: cluster 2026-03-08T23:06:35.656703+0000 mgr.y (mgr.24419) 146 : cluster [DBG] pgmap v93: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:06:37.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:36 vm06 bash[27746]: cluster 2026-03-08T23:06:35.656703+0000 mgr.y (mgr.24419) 146 : cluster [DBG] pgmap v93: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:06:37.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:36 vm11 bash[23232]: cluster 2026-03-08T23:06:35.656703+0000 mgr.y (mgr.24419) 146 : cluster [DBG] pgmap v93: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:06:37.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:36 vm11 bash[23232]: cluster 2026-03-08T23:06:35.656703+0000 mgr.y (mgr.24419) 146 : cluster [DBG] pgmap v93: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:06:38.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:37 vm06 bash[20625]: audit 2026-03-08T23:06:37.806207+0000 mon.c (mon.2) 88 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:06:38.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:37 vm06 bash[20625]: audit 2026-03-08T23:06:37.806207+0000 mon.c (mon.2) 88 : audit 
[DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:06:38.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:37 vm06 bash[27746]: audit 2026-03-08T23:06:37.806207+0000 mon.c (mon.2) 88 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:06:38.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:37 vm06 bash[27746]: audit 2026-03-08T23:06:37.806207+0000 mon.c (mon.2) 88 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:06:38.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:37 vm11 bash[23232]: audit 2026-03-08T23:06:37.806207+0000 mon.c (mon.2) 88 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:06:38.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:37 vm11 bash[23232]: audit 2026-03-08T23:06:37.806207+0000 mon.c (mon.2) 88 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:06:38.410 INFO:teuthology.orchestra.run.vm06.stderr:++ ceph auth get-key osd.0 2026-03-08T23:06:38.605 INFO:teuthology.orchestra.run.vm06.stderr:+ NK=AQAN/61p7ZnnJxAAZxJmtKxfFvu+znVbqXFatQ== 2026-03-08T23:06:38.605 INFO:teuthology.orchestra.run.vm06.stderr:+ '[' AQAN/61p7ZnnJxAAZxJmtKxfFvu+znVbqXFatQ== == AQAN/61p7ZnnJxAAZxJmtKxfFvu+znVbqXFatQ== ']' 2026-03-08T23:06:38.605 INFO:teuthology.orchestra.run.vm06.stderr:+ sleep 5 2026-03-08T23:06:39.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:38 vm06 bash[20625]: cluster 2026-03-08T23:06:37.656939+0000 mgr.y (mgr.24419) 147 : cluster [DBG] pgmap v94: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB 
avail; 853 B/s rd, 0 op/s 2026-03-08T23:06:39.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:38 vm06 bash[20625]: cluster 2026-03-08T23:06:37.656939+0000 mgr.y (mgr.24419) 147 : cluster [DBG] pgmap v94: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:06:39.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:38 vm06 bash[20625]: audit 2026-03-08T23:06:38.596528+0000 mon.a (mon.0) 826 : audit [INF] from='client.? 192.168.123.106:0/3964868987' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:06:39.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:38 vm06 bash[20625]: audit 2026-03-08T23:06:38.596528+0000 mon.a (mon.0) 826 : audit [INF] from='client.? 192.168.123.106:0/3964868987' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:06:39.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:38 vm06 bash[27746]: cluster 2026-03-08T23:06:37.656939+0000 mgr.y (mgr.24419) 147 : cluster [DBG] pgmap v94: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:06:39.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:38 vm06 bash[27746]: cluster 2026-03-08T23:06:37.656939+0000 mgr.y (mgr.24419) 147 : cluster [DBG] pgmap v94: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:06:39.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:38 vm06 bash[27746]: audit 2026-03-08T23:06:38.596528+0000 mon.a (mon.0) 826 : audit [INF] from='client.? 192.168.123.106:0/3964868987' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:06:39.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:38 vm06 bash[27746]: audit 2026-03-08T23:06:38.596528+0000 mon.a (mon.0) 826 : audit [INF] from='client.? 
192.168.123.106:0/3964868987' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:06:39.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:38 vm11 bash[23232]: cluster 2026-03-08T23:06:37.656939+0000 mgr.y (mgr.24419) 147 : cluster [DBG] pgmap v94: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:06:39.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:38 vm11 bash[23232]: cluster 2026-03-08T23:06:37.656939+0000 mgr.y (mgr.24419) 147 : cluster [DBG] pgmap v94: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:06:39.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:38 vm11 bash[23232]: audit 2026-03-08T23:06:38.596528+0000 mon.a (mon.0) 826 : audit [INF] from='client.? 192.168.123.106:0/3964868987' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:06:39.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:38 vm11 bash[23232]: audit 2026-03-08T23:06:38.596528+0000 mon.a (mon.0) 826 : audit [INF] from='client.? 
192.168.123.106:0/3964868987' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:06:41.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:40 vm06 bash[20625]: cluster 2026-03-08T23:06:39.657201+0000 mgr.y (mgr.24419) 148 : cluster [DBG] pgmap v95: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:06:41.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:40 vm06 bash[20625]: cluster 2026-03-08T23:06:39.657201+0000 mgr.y (mgr.24419) 148 : cluster [DBG] pgmap v95: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:06:41.029 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:06:40 vm06 bash[20883]: ::ffff:192.168.123.111 - - [08/Mar/2026:23:06:40] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-08T23:06:41.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:40 vm06 bash[27746]: cluster 2026-03-08T23:06:39.657201+0000 mgr.y (mgr.24419) 148 : cluster [DBG] pgmap v95: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:06:41.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:40 vm06 bash[27746]: cluster 2026-03-08T23:06:39.657201+0000 mgr.y (mgr.24419) 148 : cluster [DBG] pgmap v95: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:06:41.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:40 vm11 bash[23232]: cluster 2026-03-08T23:06:39.657201+0000 mgr.y (mgr.24419) 148 : cluster [DBG] pgmap v95: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:06:41.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:40 vm11 bash[23232]: cluster 2026-03-08T23:06:39.657201+0000 mgr.y (mgr.24419) 148 : cluster [DBG] pgmap v95: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 
GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:06:42.558 INFO:journalctl@ceph.iscsi.iscsi.a.vm11.stdout:Mar 08 23:06:42 vm11 bash[48986]: debug there is no tcmu-runner data available 2026-03-08T23:06:43.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:42 vm06 bash[20625]: cluster 2026-03-08T23:06:41.657639+0000 mgr.y (mgr.24419) 149 : cluster [DBG] pgmap v96: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:06:43.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:42 vm06 bash[20625]: cluster 2026-03-08T23:06:41.657639+0000 mgr.y (mgr.24419) 149 : cluster [DBG] pgmap v96: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:06:43.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:42 vm06 bash[20625]: audit 2026-03-08T23:06:42.191214+0000 mgr.y (mgr.24419) 150 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:06:43.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:42 vm06 bash[20625]: audit 2026-03-08T23:06:42.191214+0000 mgr.y (mgr.24419) 150 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:06:43.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:42 vm06 bash[27746]: cluster 2026-03-08T23:06:41.657639+0000 mgr.y (mgr.24419) 149 : cluster [DBG] pgmap v96: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:06:43.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:42 vm06 bash[27746]: cluster 2026-03-08T23:06:41.657639+0000 mgr.y (mgr.24419) 149 : cluster [DBG] pgmap v96: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:06:43.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:42 vm06 bash[27746]: 
audit 2026-03-08T23:06:42.191214+0000 mgr.y (mgr.24419) 150 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:06:43.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:42 vm06 bash[27746]: audit 2026-03-08T23:06:42.191214+0000 mgr.y (mgr.24419) 150 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:06:43.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:42 vm11 bash[23232]: cluster 2026-03-08T23:06:41.657639+0000 mgr.y (mgr.24419) 149 : cluster [DBG] pgmap v96: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:06:43.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:42 vm11 bash[23232]: cluster 2026-03-08T23:06:41.657639+0000 mgr.y (mgr.24419) 149 : cluster [DBG] pgmap v96: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:06:43.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:42 vm11 bash[23232]: audit 2026-03-08T23:06:42.191214+0000 mgr.y (mgr.24419) 150 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:06:43.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:42 vm11 bash[23232]: audit 2026-03-08T23:06:42.191214+0000 mgr.y (mgr.24419) 150 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:06:43.607 INFO:teuthology.orchestra.run.vm06.stderr:++ ceph auth get-key osd.0 2026-03-08T23:06:43.799 INFO:teuthology.orchestra.run.vm06.stderr:+ NK=AQAN/61p7ZnnJxAAZxJmtKxfFvu+znVbqXFatQ== 2026-03-08T23:06:43.799 INFO:teuthology.orchestra.run.vm06.stderr:+ '[' AQAN/61p7ZnnJxAAZxJmtKxfFvu+znVbqXFatQ== == AQAN/61p7ZnnJxAAZxJmtKxfFvu+znVbqXFatQ== ']' 
2026-03-08T23:06:43.799 INFO:teuthology.orchestra.run.vm06.stderr:+ sleep 5 2026-03-08T23:06:44.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:43 vm06 bash[20625]: audit 2026-03-08T23:06:43.790060+0000 mon.c (mon.2) 89 : audit [INF] from='client.? 192.168.123.106:0/3679103818' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:06:44.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:43 vm06 bash[20625]: audit 2026-03-08T23:06:43.790060+0000 mon.c (mon.2) 89 : audit [INF] from='client.? 192.168.123.106:0/3679103818' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:06:44.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:43 vm06 bash[27746]: audit 2026-03-08T23:06:43.790060+0000 mon.c (mon.2) 89 : audit [INF] from='client.? 192.168.123.106:0/3679103818' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:06:44.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:43 vm06 bash[27746]: audit 2026-03-08T23:06:43.790060+0000 mon.c (mon.2) 89 : audit [INF] from='client.? 192.168.123.106:0/3679103818' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:06:44.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:43 vm11 bash[23232]: audit 2026-03-08T23:06:43.790060+0000 mon.c (mon.2) 89 : audit [INF] from='client.? 192.168.123.106:0/3679103818' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:06:44.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:43 vm11 bash[23232]: audit 2026-03-08T23:06:43.790060+0000 mon.c (mon.2) 89 : audit [INF] from='client.? 
192.168.123.106:0/3679103818' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:06:45.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:44 vm06 bash[20625]: cluster 2026-03-08T23:06:43.657959+0000 mgr.y (mgr.24419) 151 : cluster [DBG] pgmap v97: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:06:45.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:44 vm06 bash[20625]: cluster 2026-03-08T23:06:43.657959+0000 mgr.y (mgr.24419) 151 : cluster [DBG] pgmap v97: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:06:45.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:44 vm06 bash[27746]: cluster 2026-03-08T23:06:43.657959+0000 mgr.y (mgr.24419) 151 : cluster [DBG] pgmap v97: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:06:45.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:44 vm06 bash[27746]: cluster 2026-03-08T23:06:43.657959+0000 mgr.y (mgr.24419) 151 : cluster [DBG] pgmap v97: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:06:45.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:44 vm11 bash[23232]: cluster 2026-03-08T23:06:43.657959+0000 mgr.y (mgr.24419) 151 : cluster [DBG] pgmap v97: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:06:45.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:44 vm11 bash[23232]: cluster 2026-03-08T23:06:43.657959+0000 mgr.y (mgr.24419) 151 : cluster [DBG] pgmap v97: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:06:47.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:46 vm06 bash[20625]: cluster 2026-03-08T23:06:45.658666+0000 mgr.y (mgr.24419) 152 : cluster [DBG] 
pgmap v98: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:06:47.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:46 vm06 bash[20625]: cluster 2026-03-08T23:06:45.658666+0000 mgr.y (mgr.24419) 152 : cluster [DBG] pgmap v98: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:06:47.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:46 vm06 bash[27746]: cluster 2026-03-08T23:06:45.658666+0000 mgr.y (mgr.24419) 152 : cluster [DBG] pgmap v98: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:06:47.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:46 vm06 bash[27746]: cluster 2026-03-08T23:06:45.658666+0000 mgr.y (mgr.24419) 152 : cluster [DBG] pgmap v98: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:06:47.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:46 vm11 bash[23232]: cluster 2026-03-08T23:06:45.658666+0000 mgr.y (mgr.24419) 152 : cluster [DBG] pgmap v98: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:06:47.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:46 vm11 bash[23232]: cluster 2026-03-08T23:06:45.658666+0000 mgr.y (mgr.24419) 152 : cluster [DBG] pgmap v98: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:06:48.801 INFO:teuthology.orchestra.run.vm06.stderr:++ ceph auth get-key osd.0 2026-03-08T23:06:48.991 INFO:teuthology.orchestra.run.vm06.stderr:+ NK=AQAN/61p7ZnnJxAAZxJmtKxfFvu+znVbqXFatQ== 2026-03-08T23:06:48.991 INFO:teuthology.orchestra.run.vm06.stderr:+ '[' AQAN/61p7ZnnJxAAZxJmtKxfFvu+znVbqXFatQ== == AQAN/61p7ZnnJxAAZxJmtKxfFvu+znVbqXFatQ== ']' 2026-03-08T23:06:48.992 INFO:teuthology.orchestra.run.vm06.stderr:+ sleep 5 
2026-03-08T23:06:49.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:48 vm06 bash[20625]: cluster 2026-03-08T23:06:47.659098+0000 mgr.y (mgr.24419) 153 : cluster [DBG] pgmap v99: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:06:49.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:48 vm06 bash[20625]: cluster 2026-03-08T23:06:47.659098+0000 mgr.y (mgr.24419) 153 : cluster [DBG] pgmap v99: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:06:49.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:48 vm06 bash[27746]: cluster 2026-03-08T23:06:47.659098+0000 mgr.y (mgr.24419) 153 : cluster [DBG] pgmap v99: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:06:49.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:48 vm06 bash[27746]: cluster 2026-03-08T23:06:47.659098+0000 mgr.y (mgr.24419) 153 : cluster [DBG] pgmap v99: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:06:49.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:48 vm11 bash[23232]: cluster 2026-03-08T23:06:47.659098+0000 mgr.y (mgr.24419) 153 : cluster [DBG] pgmap v99: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:06:49.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:48 vm11 bash[23232]: cluster 2026-03-08T23:06:47.659098+0000 mgr.y (mgr.24419) 153 : cluster [DBG] pgmap v99: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:06:50.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:49 vm06 bash[20625]: audit 2026-03-08T23:06:48.980114+0000 mon.b (mon.1) 40 : audit [INF] from='client.? 
192.168.123.106:0/2412662674' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:06:50.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:49 vm06 bash[20625]: audit 2026-03-08T23:06:48.980114+0000 mon.b (mon.1) 40 : audit [INF] from='client.? 192.168.123.106:0/2412662674' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:06:50.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:49 vm06 bash[27746]: audit 2026-03-08T23:06:48.980114+0000 mon.b (mon.1) 40 : audit [INF] from='client.? 192.168.123.106:0/2412662674' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:06:50.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:49 vm06 bash[27746]: audit 2026-03-08T23:06:48.980114+0000 mon.b (mon.1) 40 : audit [INF] from='client.? 192.168.123.106:0/2412662674' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:06:50.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:49 vm11 bash[23232]: audit 2026-03-08T23:06:48.980114+0000 mon.b (mon.1) 40 : audit [INF] from='client.? 192.168.123.106:0/2412662674' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:06:50.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:49 vm11 bash[23232]: audit 2026-03-08T23:06:48.980114+0000 mon.b (mon.1) 40 : audit [INF] from='client.? 
192.168.123.106:0/2412662674' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:06:51.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:50 vm06 bash[20625]: cluster 2026-03-08T23:06:49.659357+0000 mgr.y (mgr.24419) 154 : cluster [DBG] pgmap v100: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:06:51.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:50 vm06 bash[20625]: cluster 2026-03-08T23:06:49.659357+0000 mgr.y (mgr.24419) 154 : cluster [DBG] pgmap v100: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:06:51.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:50 vm06 bash[20625]: audit 2026-03-08T23:06:50.794002+0000 mon.c (mon.2) 90 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:06:51.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:50 vm06 bash[20625]: audit 2026-03-08T23:06:50.794002+0000 mon.c (mon.2) 90 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:06:51.029 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:06:50 vm06 bash[20883]: ::ffff:192.168.123.111 - - [08/Mar/2026:23:06:50] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-08T23:06:51.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:50 vm06 bash[27746]: cluster 2026-03-08T23:06:49.659357+0000 mgr.y (mgr.24419) 154 : cluster [DBG] pgmap v100: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:06:51.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:50 vm06 bash[27746]: cluster 2026-03-08T23:06:49.659357+0000 mgr.y (mgr.24419) 154 : cluster [DBG] pgmap v100: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 
160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:06:51.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:50 vm06 bash[27746]: audit 2026-03-08T23:06:50.794002+0000 mon.c (mon.2) 90 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:06:51.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:50 vm06 bash[27746]: audit 2026-03-08T23:06:50.794002+0000 mon.c (mon.2) 90 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:06:51.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:50 vm11 bash[23232]: cluster 2026-03-08T23:06:49.659357+0000 mgr.y (mgr.24419) 154 : cluster [DBG] pgmap v100: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:06:51.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:50 vm11 bash[23232]: cluster 2026-03-08T23:06:49.659357+0000 mgr.y (mgr.24419) 154 : cluster [DBG] pgmap v100: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:06:51.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:50 vm11 bash[23232]: audit 2026-03-08T23:06:50.794002+0000 mon.c (mon.2) 90 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:06:51.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:50 vm11 bash[23232]: audit 2026-03-08T23:06:50.794002+0000 mon.c (mon.2) 90 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:06:52.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:51 vm06 bash[20625]: audit 2026-03-08T23:06:51.116549+0000 mon.c (mon.2) 91 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config 
generate-minimal-conf"}]: dispatch 2026-03-08T23:06:52.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:51 vm06 bash[20625]: audit 2026-03-08T23:06:51.116549+0000 mon.c (mon.2) 91 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:06:52.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:51 vm06 bash[20625]: audit 2026-03-08T23:06:51.117723+0000 mon.c (mon.2) 92 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:06:52.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:51 vm06 bash[20625]: audit 2026-03-08T23:06:51.117723+0000 mon.c (mon.2) 92 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:06:52.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:51 vm06 bash[20625]: audit 2026-03-08T23:06:51.123166+0000 mon.a (mon.0) 827 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:06:52.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:51 vm06 bash[20625]: audit 2026-03-08T23:06:51.123166+0000 mon.a (mon.0) 827 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:06:52.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:51 vm06 bash[27746]: audit 2026-03-08T23:06:51.116549+0000 mon.c (mon.2) 91 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:06:52.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:51 vm06 bash[27746]: audit 2026-03-08T23:06:51.116549+0000 mon.c (mon.2) 91 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:06:52.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:51 vm06 bash[27746]: audit 
2026-03-08T23:06:51.117723+0000 mon.c (mon.2) 92 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:06:52.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:51 vm06 bash[27746]: audit 2026-03-08T23:06:51.117723+0000 mon.c (mon.2) 92 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:06:52.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:51 vm06 bash[27746]: audit 2026-03-08T23:06:51.123166+0000 mon.a (mon.0) 827 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:06:52.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:51 vm06 bash[27746]: audit 2026-03-08T23:06:51.123166+0000 mon.a (mon.0) 827 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:06:52.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:52 vm11 bash[23232]: audit 2026-03-08T23:06:51.116549+0000 mon.c (mon.2) 91 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:06:52.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:52 vm11 bash[23232]: audit 2026-03-08T23:06:51.116549+0000 mon.c (mon.2) 91 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:06:52.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:52 vm11 bash[23232]: audit 2026-03-08T23:06:51.117723+0000 mon.c (mon.2) 92 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:06:52.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:52 vm11 bash[23232]: audit 2026-03-08T23:06:51.117723+0000 mon.c (mon.2) 92 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get", 
"entity": "client.admin"}]: dispatch 2026-03-08T23:06:52.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:52 vm11 bash[23232]: audit 2026-03-08T23:06:51.123166+0000 mon.a (mon.0) 827 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:06:52.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:52 vm11 bash[23232]: audit 2026-03-08T23:06:51.123166+0000 mon.a (mon.0) 827 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:06:52.308 INFO:journalctl@ceph.iscsi.iscsi.a.vm11.stdout:Mar 08 23:06:52 vm11 bash[48986]: debug there is no tcmu-runner data available 2026-03-08T23:06:53.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:53 vm06 bash[20625]: cluster 2026-03-08T23:06:51.659851+0000 mgr.y (mgr.24419) 155 : cluster [DBG] pgmap v101: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:06:53.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:53 vm06 bash[20625]: cluster 2026-03-08T23:06:51.659851+0000 mgr.y (mgr.24419) 155 : cluster [DBG] pgmap v101: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:06:53.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:53 vm06 bash[20625]: audit 2026-03-08T23:06:52.812899+0000 mon.c (mon.2) 93 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:06:53.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:53 vm06 bash[20625]: audit 2026-03-08T23:06:52.812899+0000 mon.c (mon.2) 93 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:06:53.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:53 vm06 bash[27746]: cluster 2026-03-08T23:06:51.659851+0000 mgr.y (mgr.24419) 155 : cluster [DBG] pgmap v101: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 
160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:06:53.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:53 vm06 bash[27746]: cluster 2026-03-08T23:06:51.659851+0000 mgr.y (mgr.24419) 155 : cluster [DBG] pgmap v101: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:06:53.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:53 vm06 bash[27746]: audit 2026-03-08T23:06:52.812899+0000 mon.c (mon.2) 93 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:06:53.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:53 vm06 bash[27746]: audit 2026-03-08T23:06:52.812899+0000 mon.c (mon.2) 93 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:06:53.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:53 vm11 bash[23232]: cluster 2026-03-08T23:06:51.659851+0000 mgr.y (mgr.24419) 155 : cluster [DBG] pgmap v101: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:06:53.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:53 vm11 bash[23232]: cluster 2026-03-08T23:06:51.659851+0000 mgr.y (mgr.24419) 155 : cluster [DBG] pgmap v101: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:06:53.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:53 vm11 bash[23232]: audit 2026-03-08T23:06:52.812899+0000 mon.c (mon.2) 93 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:06:53.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:53 vm11 bash[23232]: audit 2026-03-08T23:06:52.812899+0000 mon.c (mon.2) 93 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' 
cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:06:53.994 INFO:teuthology.orchestra.run.vm06.stderr:++ ceph auth get-key osd.0 2026-03-08T23:06:54.194 INFO:teuthology.orchestra.run.vm06.stderr:+ NK=AQCRAK5pJ/c5KRAAjss2ZgQyWW9KTtmqqwh8bw== 2026-03-08T23:06:54.194 INFO:teuthology.orchestra.run.vm06.stderr:+ '[' AQAN/61p7ZnnJxAAZxJmtKxfFvu+znVbqXFatQ== == AQCRAK5pJ/c5KRAAjss2ZgQyWW9KTtmqqwh8bw== ']' 2026-03-08T23:06:54.194 INFO:teuthology.orchestra.run.vm06.stderr:+ for f in osd.0 osd.1 osd.2 osd.3 osd.4 osd.5 osd.6 osd.7 mgr.y mgr.x 2026-03-08T23:06:54.194 INFO:teuthology.orchestra.run.vm06.stderr:+ echo 'rotating key for osd.1' 2026-03-08T23:06:54.194 INFO:teuthology.orchestra.run.vm06.stdout:rotating key for osd.1 2026-03-08T23:06:54.194 INFO:teuthology.orchestra.run.vm06.stderr:++ ceph auth get-key osd.1 2026-03-08T23:06:54.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:54 vm11 bash[23232]: audit 2026-03-08T23:06:52.201980+0000 mgr.y (mgr.24419) 156 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:06:54.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:54 vm11 bash[23232]: audit 2026-03-08T23:06:52.201980+0000 mgr.y (mgr.24419) 156 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:06:54.389 INFO:teuthology.orchestra.run.vm06.stderr:+ K=AQAv/61psbMHBxAACRFUB2WJaJkimTzndOlZ1A== 2026-03-08T23:06:54.389 INFO:teuthology.orchestra.run.vm06.stderr:+ NK=AQAv/61psbMHBxAACRFUB2WJaJkimTzndOlZ1A== 2026-03-08T23:06:54.389 INFO:teuthology.orchestra.run.vm06.stderr:+ ceph orch daemon rotate-key osd.1 2026-03-08T23:06:54.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:54 vm06 bash[20625]: audit 2026-03-08T23:06:52.201980+0000 mgr.y (mgr.24419) 156 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service 
status", "format": "json"}]: dispatch 2026-03-08T23:06:54.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:54 vm06 bash[20625]: audit 2026-03-08T23:06:52.201980+0000 mgr.y (mgr.24419) 156 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:06:54.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:54 vm06 bash[27746]: audit 2026-03-08T23:06:52.201980+0000 mgr.y (mgr.24419) 156 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:06:54.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:54 vm06 bash[27746]: audit 2026-03-08T23:06:52.201980+0000 mgr.y (mgr.24419) 156 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:06:54.572 INFO:teuthology.orchestra.run.vm06.stdout:Scheduled to rotate-key osd.1 on host 'vm06' 2026-03-08T23:06:54.588 INFO:teuthology.orchestra.run.vm06.stderr:+ '[' AQAv/61psbMHBxAACRFUB2WJaJkimTzndOlZ1A== == AQAv/61psbMHBxAACRFUB2WJaJkimTzndOlZ1A== ']' 2026-03-08T23:06:54.588 INFO:teuthology.orchestra.run.vm06.stderr:+ sleep 5 2026-03-08T23:06:55.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:55 vm06 bash[20625]: cluster 2026-03-08T23:06:53.660199+0000 mgr.y (mgr.24419) 157 : cluster [DBG] pgmap v102: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:06:55.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:55 vm06 bash[20625]: cluster 2026-03-08T23:06:53.660199+0000 mgr.y (mgr.24419) 157 : cluster [DBG] pgmap v102: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:06:55.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:55 vm06 bash[20625]: audit 2026-03-08T23:06:54.183535+0000 mon.b (mon.1) 41 : audit [INF] from='client.? 
192.168.123.106:0/3995754394' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:06:55.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:55 vm06 bash[20625]: audit 2026-03-08T23:06:54.183535+0000 mon.b (mon.1) 41 : audit [INF] from='client.? 192.168.123.106:0/3995754394' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:06:55.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:55 vm06 bash[20625]: audit 2026-03-08T23:06:54.379715+0000 mon.c (mon.2) 94 : audit [INF] from='client.? 192.168.123.106:0/3096689426' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.1"}]: dispatch 2026-03-08T23:06:55.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:55 vm06 bash[20625]: audit 2026-03-08T23:06:54.379715+0000 mon.c (mon.2) 94 : audit [INF] from='client.? 192.168.123.106:0/3096689426' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.1"}]: dispatch 2026-03-08T23:06:55.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:55 vm06 bash[20625]: audit 2026-03-08T23:06:54.553288+0000 mon.a (mon.0) 828 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:06:55.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:55 vm06 bash[20625]: audit 2026-03-08T23:06:54.553288+0000 mon.a (mon.0) 828 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:06:55.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:55 vm06 bash[20625]: audit 2026-03-08T23:06:54.571791+0000 mon.a (mon.0) 829 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:06:55.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:55 vm06 bash[20625]: audit 2026-03-08T23:06:54.571791+0000 mon.a (mon.0) 829 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:06:55.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:55 vm06 bash[20625]: audit 2026-03-08T23:06:54.573767+0000 mon.c (mon.2) 95 : audit [DBG] from='mgr.24419 
192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:06:55.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:55 vm06 bash[20625]: audit 2026-03-08T23:06:54.573767+0000 mon.c (mon.2) 95 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:06:55.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:55 vm06 bash[20625]: audit 2026-03-08T23:06:54.574854+0000 mon.c (mon.2) 96 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:06:55.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:55 vm06 bash[20625]: audit 2026-03-08T23:06:54.574854+0000 mon.c (mon.2) 96 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:06:55.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:55 vm06 bash[20625]: audit 2026-03-08T23:06:54.575309+0000 mon.c (mon.2) 97 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:06:55.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:55 vm06 bash[20625]: audit 2026-03-08T23:06:54.575309+0000 mon.c (mon.2) 97 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:06:55.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:55 vm06 bash[20625]: audit 2026-03-08T23:06:54.581518+0000 mon.a (mon.0) 830 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:06:55.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:55 vm06 bash[20625]: audit 2026-03-08T23:06:54.581518+0000 mon.a (mon.0) 830 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:06:55.280 
INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:55 vm06 bash[20625]: audit 2026-03-08T23:06:54.593512+0000 mon.c (mon.2) 98 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get-or-create-pending", "entity": "osd.1", "format": "json"}]: dispatch 2026-03-08T23:06:55.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:55 vm06 bash[20625]: audit 2026-03-08T23:06:54.593512+0000 mon.c (mon.2) 98 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get-or-create-pending", "entity": "osd.1", "format": "json"}]: dispatch 2026-03-08T23:06:55.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:55 vm06 bash[20625]: audit 2026-03-08T23:06:54.593666+0000 mon.a (mon.0) 831 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create-pending", "entity": "osd.1", "format": "json"}]: dispatch 2026-03-08T23:06:55.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:55 vm06 bash[20625]: audit 2026-03-08T23:06:54.593666+0000 mon.a (mon.0) 831 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create-pending", "entity": "osd.1", "format": "json"}]: dispatch 2026-03-08T23:06:55.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:55 vm06 bash[20625]: audit 2026-03-08T23:06:54.597339+0000 mon.a (mon.0) 832 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd='[{"prefix": "auth get-or-create-pending", "entity": "osd.1", "format": "json"}]': finished 2026-03-08T23:06:55.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:55 vm06 bash[20625]: audit 2026-03-08T23:06:54.597339+0000 mon.a (mon.0) 832 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd='[{"prefix": "auth get-or-create-pending", "entity": "osd.1", "format": "json"}]': finished 2026-03-08T23:06:55.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:55 vm06 bash[20625]: audit 2026-03-08T23:06:55.006468+0000 mon.a (mon.0) 833 : audit [INF] from='mgr.24419 ' entity='mgr.y' 
2026-03-08T23:06:55.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:55 vm06 bash[20625]: audit 2026-03-08T23:06:55.006468+0000 mon.a (mon.0) 833 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:06:55.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:55 vm06 bash[20625]: audit 2026-03-08T23:06:55.013876+0000 mon.a (mon.0) 834 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:06:55.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:55 vm06 bash[20625]: audit 2026-03-08T23:06:55.013876+0000 mon.a (mon.0) 834 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:06:55.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:55 vm06 bash[27746]: cluster 2026-03-08T23:06:53.660199+0000 mgr.y (mgr.24419) 157 : cluster [DBG] pgmap v102: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:06:55.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:55 vm06 bash[27746]: cluster 2026-03-08T23:06:53.660199+0000 mgr.y (mgr.24419) 157 : cluster [DBG] pgmap v102: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:06:55.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:55 vm06 bash[27746]: audit 2026-03-08T23:06:54.183535+0000 mon.b (mon.1) 41 : audit [INF] from='client.? 192.168.123.106:0/3995754394' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:06:55.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:55 vm06 bash[27746]: audit 2026-03-08T23:06:54.183535+0000 mon.b (mon.1) 41 : audit [INF] from='client.? 192.168.123.106:0/3995754394' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch 2026-03-08T23:06:55.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:55 vm06 bash[27746]: audit 2026-03-08T23:06:54.379715+0000 mon.c (mon.2) 94 : audit [INF] from='client.? 
192.168.123.106:0/3096689426' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.1"}]: dispatch 2026-03-08T23:06:55.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:55 vm06 bash[27746]: audit 2026-03-08T23:06:54.379715+0000 mon.c (mon.2) 94 : audit [INF] from='client.? 192.168.123.106:0/3096689426' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.1"}]: dispatch 2026-03-08T23:06:55.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:55 vm06 bash[27746]: audit 2026-03-08T23:06:54.553288+0000 mon.a (mon.0) 828 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:06:55.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:55 vm06 bash[27746]: audit 2026-03-08T23:06:54.553288+0000 mon.a (mon.0) 828 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:06:55.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:55 vm06 bash[27746]: audit 2026-03-08T23:06:54.571791+0000 mon.a (mon.0) 829 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:06:55.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:55 vm06 bash[27746]: audit 2026-03-08T23:06:54.571791+0000 mon.a (mon.0) 829 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:06:55.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:55 vm06 bash[27746]: audit 2026-03-08T23:06:54.573767+0000 mon.c (mon.2) 95 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:06:55.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:55 vm06 bash[27746]: audit 2026-03-08T23:06:54.573767+0000 mon.c (mon.2) 95 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:06:55.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:55 vm06 bash[27746]: audit 2026-03-08T23:06:54.574854+0000 mon.c (mon.2) 96 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' 
entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:06:55.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:55 vm06 bash[27746]: audit 2026-03-08T23:06:54.574854+0000 mon.c (mon.2) 96 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:06:55.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:55 vm06 bash[27746]: audit 2026-03-08T23:06:54.575309+0000 mon.c (mon.2) 97 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:06:55.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:55 vm06 bash[27746]: audit 2026-03-08T23:06:54.575309+0000 mon.c (mon.2) 97 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:06:55.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:55 vm06 bash[27746]: audit 2026-03-08T23:06:54.581518+0000 mon.a (mon.0) 830 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:06:55.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:55 vm06 bash[27746]: audit 2026-03-08T23:06:54.581518+0000 mon.a (mon.0) 830 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:06:55.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:55 vm06 bash[27746]: audit 2026-03-08T23:06:54.593512+0000 mon.c (mon.2) 98 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get-or-create-pending", "entity": "osd.1", "format": "json"}]: dispatch 2026-03-08T23:06:55.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:55 vm06 bash[27746]: audit 2026-03-08T23:06:54.593512+0000 mon.c (mon.2) 98 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get-or-create-pending", "entity": "osd.1", "format": "json"}]: dispatch 
2026-03-08T23:06:55.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:55 vm06 bash[27746]: audit 2026-03-08T23:06:54.593666+0000 mon.a (mon.0) 831 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create-pending", "entity": "osd.1", "format": "json"}]: dispatch 2026-03-08T23:06:55.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:55 vm06 bash[27746]: audit 2026-03-08T23:06:54.593666+0000 mon.a (mon.0) 831 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create-pending", "entity": "osd.1", "format": "json"}]: dispatch 2026-03-08T23:06:55.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:55 vm06 bash[27746]: audit 2026-03-08T23:06:54.597339+0000 mon.a (mon.0) 832 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd='[{"prefix": "auth get-or-create-pending", "entity": "osd.1", "format": "json"}]': finished 2026-03-08T23:06:55.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:55 vm06 bash[27746]: audit 2026-03-08T23:06:54.597339+0000 mon.a (mon.0) 832 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd='[{"prefix": "auth get-or-create-pending", "entity": "osd.1", "format": "json"}]': finished 2026-03-08T23:06:55.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:55 vm06 bash[27746]: audit 2026-03-08T23:06:55.006468+0000 mon.a (mon.0) 833 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:06:55.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:55 vm06 bash[27746]: audit 2026-03-08T23:06:55.006468+0000 mon.a (mon.0) 833 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:06:55.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:55 vm06 bash[27746]: audit 2026-03-08T23:06:55.013876+0000 mon.a (mon.0) 834 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:06:55.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:55 vm06 bash[27746]: audit 2026-03-08T23:06:55.013876+0000 mon.a (mon.0) 834 : audit [INF] from='mgr.24419 ' entity='mgr.y' 
2026-03-08T23:06:55.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:55 vm11 bash[23232]: cluster 2026-03-08T23:06:53.660199+0000 mgr.y (mgr.24419) 157 : cluster [DBG] pgmap v102: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-08T23:06:55.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:55 vm11 bash[23232]: audit 2026-03-08T23:06:54.183535+0000 mon.b (mon.1) 41 : audit [INF] from='client.? 192.168.123.106:0/3995754394' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.0"}]: dispatch
2026-03-08T23:06:55.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:55 vm11 bash[23232]: audit 2026-03-08T23:06:54.379715+0000 mon.c (mon.2) 94 : audit [INF] from='client.? 192.168.123.106:0/3096689426' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.1"}]: dispatch
2026-03-08T23:06:55.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:55 vm11 bash[23232]: audit 2026-03-08T23:06:54.553288+0000 mon.a (mon.0) 828 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:06:55.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:55 vm11 bash[23232]: audit 2026-03-08T23:06:54.571791+0000 mon.a (mon.0) 829 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:06:55.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:55 vm11 bash[23232]: audit 2026-03-08T23:06:54.573767+0000 mon.c (mon.2) 95 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T23:06:55.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:55 vm11 bash[23232]: audit 2026-03-08T23:06:54.574854+0000 mon.c (mon.2) 96 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:06:55.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:55 vm11 bash[23232]: audit 2026-03-08T23:06:54.575309+0000 mon.c (mon.2) 97 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T23:06:55.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:55 vm11 bash[23232]: audit 2026-03-08T23:06:54.581518+0000 mon.a (mon.0) 830 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:06:55.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:55 vm11 bash[23232]: audit 2026-03-08T23:06:54.593512+0000 mon.c (mon.2) 98 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get-or-create-pending", "entity": "osd.1", "format": "json"}]: dispatch
2026-03-08T23:06:55.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:55 vm11 bash[23232]: audit 2026-03-08T23:06:54.593666+0000 mon.a (mon.0) 831 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create-pending", "entity": "osd.1", "format": "json"}]: dispatch
2026-03-08T23:06:55.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:55 vm11 bash[23232]: audit 2026-03-08T23:06:54.597339+0000 mon.a (mon.0) 832 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd='[{"prefix": "auth get-or-create-pending", "entity": "osd.1", "format": "json"}]': finished
2026-03-08T23:06:55.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:55 vm11 bash[23232]: audit 2026-03-08T23:06:55.006468+0000 mon.a (mon.0) 833 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:06:55.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:55 vm11 bash[23232]: audit 2026-03-08T23:06:55.013876+0000 mon.a (mon.0) 834 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:06:56.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:56 vm06 bash[20625]: audit 2026-03-08T23:06:54.545565+0000 mgr.y (mgr.24419) 158 : audit [DBG] from='client.14811 -' entity='client.admin' cmd=[{"prefix": "orch daemon", "action": "rotate-key", "name": "osd.1", "target": ["mon-mgr", ""]}]: dispatch
2026-03-08T23:06:56.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:56 vm06 bash[20625]: cephadm 2026-03-08T23:06:54.545994+0000 mgr.y (mgr.24419) 159 : cephadm [INF] Schedule rotate-key daemon osd.1
2026-03-08T23:06:56.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:56 vm06 bash[20625]: cephadm 2026-03-08T23:06:54.593371+0000 mgr.y (mgr.24419) 160 : cephadm [INF] Rotating authentication key for osd.1
2026-03-08T23:06:56.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:56 vm06 bash[20625]: cephadm 2026-03-08T23:06:54.600962+0000 mgr.y (mgr.24419) 161 : cephadm [INF] Reconfiguring daemon osd.1 on vm06
2026-03-08T23:06:56.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:56 vm06 bash[20625]: audit 2026-03-08T23:06:55.191954+0000 mon.a (mon.0) 835 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:06:56.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:56 vm06 bash[20625]: audit 2026-03-08T23:06:55.199714+0000 mon.a (mon.0) 836 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:06:56.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:56 vm06 bash[27746]: audit 2026-03-08T23:06:54.545565+0000 mgr.y (mgr.24419) 158 : audit [DBG] from='client.14811 -' entity='client.admin' cmd=[{"prefix": "orch daemon", "action": "rotate-key", "name": "osd.1", "target": ["mon-mgr", ""]}]: dispatch
2026-03-08T23:06:56.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:56 vm06 bash[27746]: cephadm 2026-03-08T23:06:54.545994+0000 mgr.y (mgr.24419) 159 : cephadm [INF] Schedule rotate-key daemon osd.1
2026-03-08T23:06:56.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:56 vm06 bash[27746]: cephadm 2026-03-08T23:06:54.593371+0000 mgr.y (mgr.24419) 160 : cephadm [INF] Rotating authentication key for osd.1
2026-03-08T23:06:56.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:56 vm06 bash[27746]: cephadm 2026-03-08T23:06:54.600962+0000 mgr.y (mgr.24419) 161 : cephadm [INF] Reconfiguring daemon osd.1 on vm06
2026-03-08T23:06:56.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:56 vm06 bash[27746]: audit 2026-03-08T23:06:55.191954+0000 mon.a (mon.0) 835 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:06:56.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:56 vm06 bash[27746]: audit 2026-03-08T23:06:55.199714+0000 mon.a (mon.0) 836 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:06:56.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:56 vm11 bash[23232]: audit 2026-03-08T23:06:54.545565+0000 mgr.y (mgr.24419) 158 : audit [DBG] from='client.14811 -' entity='client.admin' cmd=[{"prefix": "orch daemon", "action": "rotate-key", "name": "osd.1", "target": ["mon-mgr", ""]}]: dispatch
2026-03-08T23:06:56.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:56 vm11 bash[23232]: cephadm 2026-03-08T23:06:54.545994+0000 mgr.y (mgr.24419) 159 : cephadm [INF] Schedule rotate-key daemon osd.1
2026-03-08T23:06:56.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:56 vm11 bash[23232]: cephadm 2026-03-08T23:06:54.593371+0000 mgr.y (mgr.24419) 160 : cephadm [INF] Rotating authentication key for osd.1
2026-03-08T23:06:56.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:56 vm11 bash[23232]: cephadm 2026-03-08T23:06:54.600962+0000 mgr.y (mgr.24419) 161 : cephadm [INF] Reconfiguring daemon osd.1 on vm06
2026-03-08T23:06:56.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:56 vm11 bash[23232]: audit 2026-03-08T23:06:55.191954+0000 mon.a (mon.0) 835 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:06:56.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:56 vm11 bash[23232]: audit 2026-03-08T23:06:55.199714+0000 mon.a (mon.0) 836 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:06:57.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:57 vm06 bash[20625]: cluster 2026-03-08T23:06:55.660852+0000 mgr.y (mgr.24419) 162 : cluster [DBG] pgmap v103: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-08T23:06:57.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:57 vm06 bash[27746]: cluster 2026-03-08T23:06:55.660852+0000 mgr.y (mgr.24419) 162 : cluster [DBG] pgmap v103: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-08T23:06:57.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:57 vm11 bash[23232]: cluster 2026-03-08T23:06:55.660852+0000 mgr.y (mgr.24419) 162 : cluster [DBG] pgmap v103: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-08T23:06:58.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:06:58 vm06 bash[20625]: cluster 2026-03-08T23:06:57.661252+0000 mgr.y (mgr.24419) 163 : cluster [DBG] pgmap v104: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-08T23:06:58.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:06:58 vm06 bash[27746]: cluster 2026-03-08T23:06:57.661252+0000 mgr.y (mgr.24419) 163 : cluster [DBG] pgmap v104: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-08T23:06:58.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:06:58 vm11 bash[23232]: cluster 2026-03-08T23:06:57.661252+0000 mgr.y (mgr.24419) 163 : cluster [DBG] pgmap v104: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-08T23:06:59.590 INFO:teuthology.orchestra.run.vm06.stderr:++ ceph auth get-key osd.1
2026-03-08T23:06:59.790 INFO:teuthology.orchestra.run.vm06.stderr:+ NK=AQAv/61psbMHBxAACRFUB2WJaJkimTzndOlZ1A==
2026-03-08T23:06:59.790 INFO:teuthology.orchestra.run.vm06.stderr:+ '[' AQAv/61psbMHBxAACRFUB2WJaJkimTzndOlZ1A== == AQAv/61psbMHBxAACRFUB2WJaJkimTzndOlZ1A== ']'
2026-03-08T23:06:59.790 INFO:teuthology.orchestra.run.vm06.stderr:+ sleep 5
2026-03-08T23:07:01.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:07:00 vm06 bash[20625]: cluster 2026-03-08T23:06:59.661560+0000 mgr.y (mgr.24419) 164 : cluster [DBG] pgmap v105: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-08T23:07:01.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:07:00 vm06 bash[20625]: audit 2026-03-08T23:06:59.780641+0000 mon.c (mon.2) 99 : audit [INF] from='client.? 192.168.123.106:0/2042782895' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.1"}]: dispatch
2026-03-08T23:07:01.029 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:07:00 vm06 bash[20883]: ::ffff:192.168.123.111 - - [08/Mar/2026:23:07:00] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-08T23:07:01.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:00 vm06 bash[27746]: cluster 2026-03-08T23:06:59.661560+0000 mgr.y (mgr.24419) 164 : cluster [DBG] pgmap v105: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-08T23:07:01.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:00 vm06 bash[27746]: audit 2026-03-08T23:06:59.780641+0000 mon.c (mon.2) 99 : audit [INF] from='client.? 192.168.123.106:0/2042782895' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.1"}]: dispatch
2026-03-08T23:07:01.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:00 vm11 bash[23232]: cluster 2026-03-08T23:06:59.661560+0000 mgr.y (mgr.24419) 164 : cluster [DBG] pgmap v105: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-08T23:07:01.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:00 vm11 bash[23232]: audit 2026-03-08T23:06:59.780641+0000 mon.c (mon.2) 99 : audit [INF] from='client.? 192.168.123.106:0/2042782895' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.1"}]: dispatch
2026-03-08T23:07:02.558 INFO:journalctl@ceph.iscsi.iscsi.a.vm11.stdout:Mar 08 23:07:02 vm11 bash[48986]: debug there is no tcmu-runner data available
2026-03-08T23:07:03.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:07:02 vm06 bash[20625]: cluster 2026-03-08T23:07:01.661968+0000 mgr.y (mgr.24419) 165 : cluster [DBG] pgmap v106: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-08T23:07:03.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:02 vm06 bash[27746]: cluster 2026-03-08T23:07:01.661968+0000 mgr.y (mgr.24419) 165 : cluster [DBG] pgmap v106: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-08T23:07:03.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:02 vm11 bash[23232]: cluster 2026-03-08T23:07:01.661968+0000 mgr.y (mgr.24419) 165 : cluster [DBG] pgmap v106: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
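The `teuthology.orchestra.run.vm06.stderr` lines above are the rotate-keys workunit polling for the rotated key: it re-runs `ceph auth get-key osd.1`, compares the result against the pre-rotation key, and sleeps 5 seconds while they still match. A minimal sketch of that wait loop, assuming a live cluster with the `ceph` CLI on PATH (`wait_for_new_key` is an illustrative helper name, not taken from the workunit):

```shell
# Sketch of the rotate-key wait loop seen in the trace (assumption: a
# reachable cluster and the `ceph` CLI; the helper name is ours).
wait_for_new_key() {
    local daemon=$1 old_key=$2 new_key
    while true; do
        new_key=$(ceph auth get-key "$daemon")   # key currently stored in the mons
        if [ "$new_key" != "$old_key" ]; then
            echo "$new_key"                      # rotation has landed
            return 0
        fi
        sleep 5                                  # rotation is asynchronous; retry
    done
}

# Typical use, mirroring the trace:
#   OLD=$(ceph auth get-key osd.1)
#   ceph orch daemon rotate-key osd.1   # the command dispatched in the audit log
#   NEW=$(wait_for_new_key osd.1 "$OLD")
```

The polling is necessary because `ceph orch daemon rotate-key` only schedules the rotation (the `Schedule rotate-key daemon osd.1` cephadm log line); the new key appears in the monitors' auth database some time later, once the daemon has been reconfigured.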
2026-03-08T23:07:04.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:07:03 vm06 bash[20625]: audit 2026-03-08T23:07:02.205659+0000 mgr.y (mgr.24419) 166 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:07:04.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:03 vm06 bash[27746]: audit 2026-03-08T23:07:02.205659+0000 mgr.y (mgr.24419) 166 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:07:04.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:03 vm11 bash[23232]: audit 2026-03-08T23:07:02.205659+0000 mgr.y (mgr.24419) 166 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:07:04.796 INFO:teuthology.orchestra.run.vm06.stderr:++ ceph auth get-key osd.1
2026-03-08T23:07:04.983 INFO:teuthology.orchestra.run.vm06.stderr:+ NK=AQAv/61psbMHBxAACRFUB2WJaJkimTzndOlZ1A==
2026-03-08T23:07:04.983 INFO:teuthology.orchestra.run.vm06.stderr:+ '[' AQAv/61psbMHBxAACRFUB2WJaJkimTzndOlZ1A== == AQAv/61psbMHBxAACRFUB2WJaJkimTzndOlZ1A== ']'
2026-03-08T23:07:04.983 INFO:teuthology.orchestra.run.vm06.stderr:+ sleep 5
2026-03-08T23:07:05.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:07:04 vm06 bash[20625]: cluster 2026-03-08T23:07:03.662238+0000 mgr.y (mgr.24419) 167 : cluster [DBG] pgmap v107: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-08T23:07:05.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:04 vm06 bash[27746]: cluster 2026-03-08T23:07:03.662238+0000 mgr.y (mgr.24419) 167 : cluster [DBG] pgmap v107: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-08T23:07:05.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:04 vm11 bash[23232]: cluster 2026-03-08T23:07:03.662238+0000 mgr.y (mgr.24419) 167 : cluster [DBG] pgmap v107: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-08T23:07:06.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:07:05 vm06 bash[20625]: audit 2026-03-08T23:07:04.974664+0000 mon.a (mon.0) 837 : audit [INF] from='client.? 192.168.123.106:0/3248948327' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.1"}]: dispatch
2026-03-08T23:07:06.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:05 vm06 bash[27746]: audit 2026-03-08T23:07:04.974664+0000 mon.a (mon.0) 837 : audit [INF] from='client.? 192.168.123.106:0/3248948327' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.1"}]: dispatch
2026-03-08T23:07:06.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:05 vm11 bash[23232]: audit 2026-03-08T23:07:04.974664+0000 mon.a (mon.0) 837 : audit [INF] from='client.? 192.168.123.106:0/3248948327' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.1"}]: dispatch
2026-03-08T23:07:07.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:07:06 vm06 bash[20625]: cluster 2026-03-08T23:07:05.662770+0000 mgr.y (mgr.24419) 168 : cluster [DBG] pgmap v108: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-08T23:07:07.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:06 vm06 bash[27746]: cluster 2026-03-08T23:07:05.662770+0000 mgr.y (mgr.24419) 168 : cluster [DBG] pgmap v108: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-08T23:07:07.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:06 vm11 bash[23232]: cluster 2026-03-08T23:07:05.662770+0000 mgr.y (mgr.24419) 168 : cluster [DBG] pgmap v108: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-08T23:07:09.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:07:08 vm06 bash[20625]: cluster 2026-03-08T23:07:07.663037+0000 mgr.y (mgr.24419) 169 : cluster [DBG] pgmap v109: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-08T23:07:09.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:07:08 vm06 bash[20625]: audit 2026-03-08T23:07:07.819391+0000 mon.c (mon.2) 100 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-08T23:07:09.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:08 vm06 bash[27746]: cluster 2026-03-08T23:07:07.663037+0000 mgr.y (mgr.24419) 169 : cluster [DBG] pgmap v109: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-08T23:07:09.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:08 vm06 bash[27746]: audit 2026-03-08T23:07:07.819391+0000 mon.c (mon.2) 100 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-08T23:07:09.057 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:08 vm11 bash[23232]: cluster 2026-03-08T23:07:07.663037+0000 mgr.y (mgr.24419) 169 : cluster [DBG] pgmap v109: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-08T23:07:09.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:08 vm11 bash[23232]: audit 2026-03-08T23:07:07.819391+0000 mon.c (mon.2) 100 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-08T23:07:09.984 INFO:teuthology.orchestra.run.vm06.stderr:++ ceph auth get-key osd.1
2026-03-08T23:07:10.190 INFO:teuthology.orchestra.run.vm06.stderr:+ NK=AQAv/61psbMHBxAACRFUB2WJaJkimTzndOlZ1A==
2026-03-08T23:07:10.191 INFO:teuthology.orchestra.run.vm06.stderr:+ '[' AQAv/61psbMHBxAACRFUB2WJaJkimTzndOlZ1A== == AQAv/61psbMHBxAACRFUB2WJaJkimTzndOlZ1A== ']'
2026-03-08T23:07:10.191 INFO:teuthology.orchestra.run.vm06.stderr:+ sleep 5
2026-03-08T23:07:11.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:07:10 vm06 bash[20625]: cluster 2026-03-08T23:07:09.663299+0000 mgr.y (mgr.24419) 170 : cluster [DBG] pgmap v110: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-08T23:07:11.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:07:10 vm06 bash[20625]: audit 2026-03-08T23:07:10.180629+0000 mon.c (mon.2) 101 : audit [INF] from='client.? 192.168.123.106:0/1716705015' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.1"}]: dispatch
2026-03-08T23:07:11.029 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:07:10 vm06 bash[20883]: ::ffff:192.168.123.111 - - [08/Mar/2026:23:07:10] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-08T23:07:11.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:10 vm06 bash[27746]: cluster 2026-03-08T23:07:09.663299+0000 mgr.y (mgr.24419) 170 : cluster [DBG] pgmap v110: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-08T23:07:11.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:10 vm06 bash[27746]: audit 2026-03-08T23:07:10.180629+0000 mon.c (mon.2) 101 : audit [INF] from='client.? 
192.168.123.106:0/1716705015' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.1"}]: dispatch 2026-03-08T23:07:11.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:10 vm06 bash[27746]: audit 2026-03-08T23:07:10.180629+0000 mon.c (mon.2) 101 : audit [INF] from='client.? 192.168.123.106:0/1716705015' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.1"}]: dispatch 2026-03-08T23:07:11.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:10 vm11 bash[23232]: cluster 2026-03-08T23:07:09.663299+0000 mgr.y (mgr.24419) 170 : cluster [DBG] pgmap v110: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:07:11.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:10 vm11 bash[23232]: cluster 2026-03-08T23:07:09.663299+0000 mgr.y (mgr.24419) 170 : cluster [DBG] pgmap v110: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:07:11.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:10 vm11 bash[23232]: audit 2026-03-08T23:07:10.180629+0000 mon.c (mon.2) 101 : audit [INF] from='client.? 192.168.123.106:0/1716705015' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.1"}]: dispatch 2026-03-08T23:07:11.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:10 vm11 bash[23232]: audit 2026-03-08T23:07:10.180629+0000 mon.c (mon.2) 101 : audit [INF] from='client.? 
192.168.123.106:0/1716705015' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.1"}]: dispatch 2026-03-08T23:07:12.558 INFO:journalctl@ceph.iscsi.iscsi.a.vm11.stdout:Mar 08 23:07:12 vm11 bash[48986]: debug there is no tcmu-runner data available 2026-03-08T23:07:13.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:12 vm11 bash[23232]: cluster 2026-03-08T23:07:11.663880+0000 mgr.y (mgr.24419) 171 : cluster [DBG] pgmap v111: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:07:13.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:12 vm11 bash[23232]: cluster 2026-03-08T23:07:11.663880+0000 mgr.y (mgr.24419) 171 : cluster [DBG] pgmap v111: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:07:13.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:07:12 vm06 bash[20625]: cluster 2026-03-08T23:07:11.663880+0000 mgr.y (mgr.24419) 171 : cluster [DBG] pgmap v111: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:07:13.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:07:12 vm06 bash[20625]: cluster 2026-03-08T23:07:11.663880+0000 mgr.y (mgr.24419) 171 : cluster [DBG] pgmap v111: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:07:13.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:12 vm06 bash[27746]: cluster 2026-03-08T23:07:11.663880+0000 mgr.y (mgr.24419) 171 : cluster [DBG] pgmap v111: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:07:13.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:12 vm06 bash[27746]: cluster 2026-03-08T23:07:11.663880+0000 mgr.y (mgr.24419) 171 : cluster [DBG] pgmap v111: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 
2026-03-08T23:07:14.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:13 vm11 bash[23232]: audit 2026-03-08T23:07:12.213037+0000 mgr.y (mgr.24419) 172 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:07:14.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:13 vm11 bash[23232]: audit 2026-03-08T23:07:12.213037+0000 mgr.y (mgr.24419) 172 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:07:14.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:07:13 vm06 bash[20625]: audit 2026-03-08T23:07:12.213037+0000 mgr.y (mgr.24419) 172 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:07:14.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:07:13 vm06 bash[20625]: audit 2026-03-08T23:07:12.213037+0000 mgr.y (mgr.24419) 172 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:07:14.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:13 vm06 bash[27746]: audit 2026-03-08T23:07:12.213037+0000 mgr.y (mgr.24419) 172 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:07:14.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:13 vm06 bash[27746]: audit 2026-03-08T23:07:12.213037+0000 mgr.y (mgr.24419) 172 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:07:15.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:14 vm11 bash[23232]: cluster 2026-03-08T23:07:13.664145+0000 mgr.y (mgr.24419) 173 : cluster [DBG] pgmap v112: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 
op/s 2026-03-08T23:07:15.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:14 vm11 bash[23232]: cluster 2026-03-08T23:07:13.664145+0000 mgr.y (mgr.24419) 173 : cluster [DBG] pgmap v112: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:07:15.192 INFO:teuthology.orchestra.run.vm06.stderr:++ ceph auth get-key osd.1 2026-03-08T23:07:15.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:07:14 vm06 bash[20625]: cluster 2026-03-08T23:07:13.664145+0000 mgr.y (mgr.24419) 173 : cluster [DBG] pgmap v112: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:07:15.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:07:14 vm06 bash[20625]: cluster 2026-03-08T23:07:13.664145+0000 mgr.y (mgr.24419) 173 : cluster [DBG] pgmap v112: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:07:15.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:14 vm06 bash[27746]: cluster 2026-03-08T23:07:13.664145+0000 mgr.y (mgr.24419) 173 : cluster [DBG] pgmap v112: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:07:15.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:14 vm06 bash[27746]: cluster 2026-03-08T23:07:13.664145+0000 mgr.y (mgr.24419) 173 : cluster [DBG] pgmap v112: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:07:15.389 INFO:teuthology.orchestra.run.vm06.stderr:+ NK=AQAv/61psbMHBxAACRFUB2WJaJkimTzndOlZ1A== 2026-03-08T23:07:15.389 INFO:teuthology.orchestra.run.vm06.stderr:+ '[' AQAv/61psbMHBxAACRFUB2WJaJkimTzndOlZ1A== == AQAv/61psbMHBxAACRFUB2WJaJkimTzndOlZ1A== ']' 2026-03-08T23:07:15.389 INFO:teuthology.orchestra.run.vm06.stderr:+ sleep 5 2026-03-08T23:07:16.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:07:15 vm06 bash[20625]: audit 
2026-03-08T23:07:15.379933+0000 mon.a (mon.0) 838 : audit [INF] from='client.? 192.168.123.106:0/3413453291' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.1"}]: dispatch 2026-03-08T23:07:16.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:07:15 vm06 bash[20625]: audit 2026-03-08T23:07:15.379933+0000 mon.a (mon.0) 838 : audit [INF] from='client.? 192.168.123.106:0/3413453291' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.1"}]: dispatch 2026-03-08T23:07:16.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:15 vm06 bash[27746]: audit 2026-03-08T23:07:15.379933+0000 mon.a (mon.0) 838 : audit [INF] from='client.? 192.168.123.106:0/3413453291' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.1"}]: dispatch 2026-03-08T23:07:16.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:15 vm06 bash[27746]: audit 2026-03-08T23:07:15.379933+0000 mon.a (mon.0) 838 : audit [INF] from='client.? 192.168.123.106:0/3413453291' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.1"}]: dispatch 2026-03-08T23:07:16.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:15 vm11 bash[23232]: audit 2026-03-08T23:07:15.379933+0000 mon.a (mon.0) 838 : audit [INF] from='client.? 192.168.123.106:0/3413453291' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.1"}]: dispatch 2026-03-08T23:07:16.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:15 vm11 bash[23232]: audit 2026-03-08T23:07:15.379933+0000 mon.a (mon.0) 838 : audit [INF] from='client.? 
192.168.123.106:0/3413453291' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.1"}]: dispatch 2026-03-08T23:07:17.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:07:16 vm06 bash[20625]: cluster 2026-03-08T23:07:15.664663+0000 mgr.y (mgr.24419) 174 : cluster [DBG] pgmap v113: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:07:17.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:07:16 vm06 bash[20625]: cluster 2026-03-08T23:07:15.664663+0000 mgr.y (mgr.24419) 174 : cluster [DBG] pgmap v113: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:07:17.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:16 vm06 bash[27746]: cluster 2026-03-08T23:07:15.664663+0000 mgr.y (mgr.24419) 174 : cluster [DBG] pgmap v113: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:07:17.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:16 vm06 bash[27746]: cluster 2026-03-08T23:07:15.664663+0000 mgr.y (mgr.24419) 174 : cluster [DBG] pgmap v113: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:07:17.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:16 vm11 bash[23232]: cluster 2026-03-08T23:07:15.664663+0000 mgr.y (mgr.24419) 174 : cluster [DBG] pgmap v113: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:07:17.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:16 vm11 bash[23232]: cluster 2026-03-08T23:07:15.664663+0000 mgr.y (mgr.24419) 174 : cluster [DBG] pgmap v113: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:07:19.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:07:18 vm06 bash[20625]: cluster 2026-03-08T23:07:17.664921+0000 mgr.y (mgr.24419) 
175 : cluster [DBG] pgmap v114: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:07:19.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:07:18 vm06 bash[20625]: cluster 2026-03-08T23:07:17.664921+0000 mgr.y (mgr.24419) 175 : cluster [DBG] pgmap v114: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:07:19.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:18 vm06 bash[27746]: cluster 2026-03-08T23:07:17.664921+0000 mgr.y (mgr.24419) 175 : cluster [DBG] pgmap v114: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:07:19.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:18 vm06 bash[27746]: cluster 2026-03-08T23:07:17.664921+0000 mgr.y (mgr.24419) 175 : cluster [DBG] pgmap v114: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:07:19.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:18 vm11 bash[23232]: cluster 2026-03-08T23:07:17.664921+0000 mgr.y (mgr.24419) 175 : cluster [DBG] pgmap v114: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:07:19.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:18 vm11 bash[23232]: cluster 2026-03-08T23:07:17.664921+0000 mgr.y (mgr.24419) 175 : cluster [DBG] pgmap v114: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:07:20.390 INFO:teuthology.orchestra.run.vm06.stderr:++ ceph auth get-key osd.1 2026-03-08T23:07:20.585 INFO:teuthology.orchestra.run.vm06.stderr:+ NK=AQAv/61psbMHBxAACRFUB2WJaJkimTzndOlZ1A== 2026-03-08T23:07:20.585 INFO:teuthology.orchestra.run.vm06.stderr:+ '[' AQAv/61psbMHBxAACRFUB2WJaJkimTzndOlZ1A== == AQAv/61psbMHBxAACRFUB2WJaJkimTzndOlZ1A== ']' 2026-03-08T23:07:20.585 INFO:teuthology.orchestra.run.vm06.stderr:+ 
sleep 5 2026-03-08T23:07:21.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:07:20 vm06 bash[20625]: cluster 2026-03-08T23:07:19.665154+0000 mgr.y (mgr.24419) 176 : cluster [DBG] pgmap v115: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:07:21.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:07:20 vm06 bash[20625]: cluster 2026-03-08T23:07:19.665154+0000 mgr.y (mgr.24419) 176 : cluster [DBG] pgmap v115: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:07:21.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:07:20 vm06 bash[20625]: audit 2026-03-08T23:07:20.574951+0000 mon.b (mon.1) 42 : audit [INF] from='client.? 192.168.123.106:0/4016073405' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.1"}]: dispatch 2026-03-08T23:07:21.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:07:20 vm06 bash[20625]: audit 2026-03-08T23:07:20.574951+0000 mon.b (mon.1) 42 : audit [INF] from='client.? 
192.168.123.106:0/4016073405' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.1"}]: dispatch 2026-03-08T23:07:21.029 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:07:20 vm06 bash[20883]: ::ffff:192.168.123.111 - - [08/Mar/2026:23:07:20] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-08T23:07:21.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:20 vm06 bash[27746]: cluster 2026-03-08T23:07:19.665154+0000 mgr.y (mgr.24419) 176 : cluster [DBG] pgmap v115: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:07:21.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:20 vm06 bash[27746]: cluster 2026-03-08T23:07:19.665154+0000 mgr.y (mgr.24419) 176 : cluster [DBG] pgmap v115: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:07:21.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:20 vm06 bash[27746]: audit 2026-03-08T23:07:20.574951+0000 mon.b (mon.1) 42 : audit [INF] from='client.? 192.168.123.106:0/4016073405' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.1"}]: dispatch 2026-03-08T23:07:21.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:20 vm06 bash[27746]: audit 2026-03-08T23:07:20.574951+0000 mon.b (mon.1) 42 : audit [INF] from='client.? 
192.168.123.106:0/4016073405' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.1"}]: dispatch 2026-03-08T23:07:21.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:20 vm11 bash[23232]: cluster 2026-03-08T23:07:19.665154+0000 mgr.y (mgr.24419) 176 : cluster [DBG] pgmap v115: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:07:21.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:20 vm11 bash[23232]: cluster 2026-03-08T23:07:19.665154+0000 mgr.y (mgr.24419) 176 : cluster [DBG] pgmap v115: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:07:21.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:20 vm11 bash[23232]: audit 2026-03-08T23:07:20.574951+0000 mon.b (mon.1) 42 : audit [INF] from='client.? 192.168.123.106:0/4016073405' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.1"}]: dispatch 2026-03-08T23:07:21.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:20 vm11 bash[23232]: audit 2026-03-08T23:07:20.574951+0000 mon.b (mon.1) 42 : audit [INF] from='client.? 
192.168.123.106:0/4016073405' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.1"}]: dispatch 2026-03-08T23:07:22.558 INFO:journalctl@ceph.iscsi.iscsi.a.vm11.stdout:Mar 08 23:07:22 vm11 bash[48986]: debug there is no tcmu-runner data available 2026-03-08T23:07:23.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:07:22 vm06 bash[20625]: cluster 2026-03-08T23:07:21.665536+0000 mgr.y (mgr.24419) 177 : cluster [DBG] pgmap v116: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:07:23.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:07:22 vm06 bash[20625]: cluster 2026-03-08T23:07:21.665536+0000 mgr.y (mgr.24419) 177 : cluster [DBG] pgmap v116: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:07:23.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:07:22 vm06 bash[20625]: audit 2026-03-08T23:07:22.828424+0000 mon.c (mon.2) 102 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:07:23.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:07:22 vm06 bash[20625]: audit 2026-03-08T23:07:22.828424+0000 mon.c (mon.2) 102 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:07:23.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:22 vm06 bash[27746]: cluster 2026-03-08T23:07:21.665536+0000 mgr.y (mgr.24419) 177 : cluster [DBG] pgmap v116: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:07:23.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:22 vm06 bash[27746]: cluster 2026-03-08T23:07:21.665536+0000 mgr.y (mgr.24419) 177 : cluster [DBG] pgmap v116: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 
2026-03-08T23:07:23.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:22 vm06 bash[27746]: audit 2026-03-08T23:07:22.828424+0000 mon.c (mon.2) 102 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:07:23.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:22 vm06 bash[27746]: audit 2026-03-08T23:07:22.828424+0000 mon.c (mon.2) 102 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:07:23.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:22 vm11 bash[23232]: cluster 2026-03-08T23:07:21.665536+0000 mgr.y (mgr.24419) 177 : cluster [DBG] pgmap v116: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:07:23.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:22 vm11 bash[23232]: cluster 2026-03-08T23:07:21.665536+0000 mgr.y (mgr.24419) 177 : cluster [DBG] pgmap v116: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:07:23.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:22 vm11 bash[23232]: audit 2026-03-08T23:07:22.828424+0000 mon.c (mon.2) 102 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:07:23.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:22 vm11 bash[23232]: audit 2026-03-08T23:07:22.828424+0000 mon.c (mon.2) 102 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:07:24.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:07:23 vm06 bash[20625]: audit 2026-03-08T23:07:22.221980+0000 mgr.y (mgr.24419) 178 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service 
status", "format": "json"}]: dispatch 2026-03-08T23:07:24.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:07:23 vm06 bash[20625]: audit 2026-03-08T23:07:22.221980+0000 mgr.y (mgr.24419) 178 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:07:24.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:23 vm06 bash[27746]: audit 2026-03-08T23:07:22.221980+0000 mgr.y (mgr.24419) 178 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:07:24.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:23 vm06 bash[27746]: audit 2026-03-08T23:07:22.221980+0000 mgr.y (mgr.24419) 178 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:07:24.307 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:23 vm11 bash[23232]: audit 2026-03-08T23:07:22.221980+0000 mgr.y (mgr.24419) 178 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:07:24.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:23 vm11 bash[23232]: audit 2026-03-08T23:07:22.221980+0000 mgr.y (mgr.24419) 178 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:07:25.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:07:24 vm06 bash[20625]: cluster 2026-03-08T23:07:23.665797+0000 mgr.y (mgr.24419) 179 : cluster [DBG] pgmap v117: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:07:25.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:07:24 vm06 bash[20625]: cluster 2026-03-08T23:07:23.665797+0000 mgr.y (mgr.24419) 179 : cluster [DBG] pgmap v117: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 
GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:07:25.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:24 vm06 bash[27746]: cluster 2026-03-08T23:07:23.665797+0000 mgr.y (mgr.24419) 179 : cluster [DBG] pgmap v117: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:07:25.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:24 vm06 bash[27746]: cluster 2026-03-08T23:07:23.665797+0000 mgr.y (mgr.24419) 179 : cluster [DBG] pgmap v117: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:07:25.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:24 vm11 bash[23232]: cluster 2026-03-08T23:07:23.665797+0000 mgr.y (mgr.24419) 179 : cluster [DBG] pgmap v117: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:07:25.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:24 vm11 bash[23232]: cluster 2026-03-08T23:07:23.665797+0000 mgr.y (mgr.24419) 179 : cluster [DBG] pgmap v117: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:07:25.586 INFO:teuthology.orchestra.run.vm06.stderr:++ ceph auth get-key osd.1 2026-03-08T23:07:25.780 INFO:teuthology.orchestra.run.vm06.stderr:+ NK=AQAv/61psbMHBxAACRFUB2WJaJkimTzndOlZ1A== 2026-03-08T23:07:25.781 INFO:teuthology.orchestra.run.vm06.stderr:+ '[' AQAv/61psbMHBxAACRFUB2WJaJkimTzndOlZ1A== == AQAv/61psbMHBxAACRFUB2WJaJkimTzndOlZ1A== ']' 2026-03-08T23:07:25.781 INFO:teuthology.orchestra.run.vm06.stderr:+ sleep 5 2026-03-08T23:07:26.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:07:25 vm06 bash[20625]: audit 2026-03-08T23:07:25.771565+0000 mon.c (mon.2) 103 : audit [INF] from='client.? 
192.168.123.106:0/952099342' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.1"}]: dispatch 2026-03-08T23:07:26.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:07:25 vm06 bash[20625]: audit 2026-03-08T23:07:25.771565+0000 mon.c (mon.2) 103 : audit [INF] from='client.? 192.168.123.106:0/952099342' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.1"}]: dispatch 2026-03-08T23:07:26.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:25 vm06 bash[27746]: audit 2026-03-08T23:07:25.771565+0000 mon.c (mon.2) 103 : audit [INF] from='client.? 192.168.123.106:0/952099342' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.1"}]: dispatch 2026-03-08T23:07:26.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:25 vm06 bash[27746]: audit 2026-03-08T23:07:25.771565+0000 mon.c (mon.2) 103 : audit [INF] from='client.? 192.168.123.106:0/952099342' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.1"}]: dispatch 2026-03-08T23:07:26.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:25 vm11 bash[23232]: audit 2026-03-08T23:07:25.771565+0000 mon.c (mon.2) 103 : audit [INF] from='client.? 192.168.123.106:0/952099342' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.1"}]: dispatch 2026-03-08T23:07:26.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:25 vm11 bash[23232]: audit 2026-03-08T23:07:25.771565+0000 mon.c (mon.2) 103 : audit [INF] from='client.? 
192.168.123.106:0/952099342' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.1"}]: dispatch
2026-03-08T23:07:27.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:07:27 vm06 bash[20625]: cluster 2026-03-08T23:07:25.666289+0000 mgr.y (mgr.24419) 180 : cluster [DBG] pgmap v118: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-08T23:07:27.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:27 vm06 bash[27746]: cluster 2026-03-08T23:07:25.666289+0000 mgr.y (mgr.24419) 180 : cluster [DBG] pgmap v118: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-08T23:07:27.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:27 vm11 bash[23232]: cluster 2026-03-08T23:07:25.666289+0000 mgr.y (mgr.24419) 180 : cluster [DBG] pgmap v118: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-08T23:07:28.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:07:28 vm06 bash[20625]: cluster 2026-03-08T23:07:27.666575+0000 mgr.y (mgr.24419) 181 : cluster [DBG] pgmap v119: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-08T23:07:28.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:28 vm06 bash[27746]: cluster 2026-03-08T23:07:27.666575+0000 mgr.y (mgr.24419) 181 : cluster [DBG] pgmap v119: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-08T23:07:28.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:28 vm11 bash[23232]: cluster 2026-03-08T23:07:27.666575+0000 mgr.y (mgr.24419) 181 : cluster [DBG] pgmap v119: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-08T23:07:30.782 INFO:teuthology.orchestra.run.vm06.stderr:++ ceph auth get-key osd.1
2026-03-08T23:07:30.965 INFO:teuthology.orchestra.run.vm06.stdout:rotating key for osd.2
2026-03-08T23:07:30.966 INFO:teuthology.orchestra.run.vm06.stderr:+ NK=AQAOAa5pWRNjIxAAMRZWousE90qXZoePN2jMCw==
2026-03-08T23:07:30.966 INFO:teuthology.orchestra.run.vm06.stderr:+ '[' AQAv/61psbMHBxAACRFUB2WJaJkimTzndOlZ1A== == AQAOAa5pWRNjIxAAMRZWousE90qXZoePN2jMCw== ']'
2026-03-08T23:07:30.966 INFO:teuthology.orchestra.run.vm06.stderr:+ for f in osd.0 osd.1 osd.2 osd.3 osd.4 osd.5 osd.6 osd.7 mgr.y mgr.x
2026-03-08T23:07:30.966 INFO:teuthology.orchestra.run.vm06.stderr:+ echo 'rotating key for osd.2'
2026-03-08T23:07:30.966 INFO:teuthology.orchestra.run.vm06.stderr:++ ceph auth get-key osd.2
2026-03-08T23:07:31.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:07:30 vm06 bash[20625]: cluster 2026-03-08T23:07:29.666870+0000 mgr.y (mgr.24419) 182 : cluster [DBG] pgmap v120: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-08T23:07:31.029 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:07:30 vm06 bash[20883]: ::ffff:192.168.123.111 - - [08/Mar/2026:23:07:30] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-08T23:07:31.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:30 vm06 bash[27746]: cluster 2026-03-08T23:07:29.666870+0000 mgr.y (mgr.24419) 182 : cluster [DBG] pgmap v120: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-08T23:07:31.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:30 vm11 bash[23232]: cluster 2026-03-08T23:07:29.666870+0000 mgr.y (mgr.24419) 182 : cluster [DBG] pgmap v120: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-08T23:07:31.147 INFO:teuthology.orchestra.run.vm06.stderr:+ K=AQBR/61p/b4ROhAAxGxKRg4jQ1WTOfDrnvTvgQ==
2026-03-08T23:07:31.147 INFO:teuthology.orchestra.run.vm06.stderr:+ NK=AQBR/61p/b4ROhAAxGxKRg4jQ1WTOfDrnvTvgQ==
2026-03-08T23:07:31.147 INFO:teuthology.orchestra.run.vm06.stderr:+ ceph orch daemon rotate-key osd.2
2026-03-08T23:07:31.314 INFO:teuthology.orchestra.run.vm06.stdout:Scheduled to rotate-key osd.2 on host 'vm06'
2026-03-08T23:07:31.338 INFO:teuthology.orchestra.run.vm06.stderr:+ '[' AQBR/61p/b4ROhAAxGxKRg4jQ1WTOfDrnvTvgQ== == AQBR/61p/b4ROhAAxGxKRg4jQ1WTOfDrnvTvgQ== ']'
2026-03-08T23:07:31.338 INFO:teuthology.orchestra.run.vm06.stderr:+ sleep 5
2026-03-08T23:07:31.998 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:07:31 vm06 bash[20625]: audit 2026-03-08T23:07:30.957066+0000 mon.c (mon.2) 104 : audit [INF] from='client.? 192.168.123.106:0/4203771883' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.1"}]: dispatch
2026-03-08T23:07:31.999 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:07:31 vm06 bash[20625]: audit 2026-03-08T23:07:31.137709+0000 mon.a (mon.0) 839 : audit [INF] from='client.? 192.168.123.106:0/427798898' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.2"}]: dispatch
2026-03-08T23:07:31.999 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:07:31 vm06 bash[20625]: audit 2026-03-08T23:07:31.305230+0000 mon.a (mon.0) 840 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:07:31.999 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:07:31 vm06 bash[20625]: audit 2026-03-08T23:07:31.311416+0000 mon.a (mon.0) 841 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:07:31.999 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:07:31 vm06 bash[20625]: audit 2026-03-08T23:07:31.312532+0000 mon.c (mon.2) 105 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T23:07:31.999 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:07:31 vm06 bash[20625]: audit 2026-03-08T23:07:31.591855+0000 mon.a (mon.0) 842 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:07:31.999 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:07:31 vm06 bash[20625]: audit 2026-03-08T23:07:31.599112+0000 mon.a (mon.0) 843 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:07:31.999 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:07:31 vm06 bash[20625]: audit 2026-03-08T23:07:31.603777+0000 mon.a (mon.0) 844 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:07:31.999 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:07:31 vm06 bash[20625]: audit 2026-03-08T23:07:31.609336+0000 mon.a (mon.0) 845 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:07:31.999 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:07:31 vm06 bash[20625]: audit 2026-03-08T23:07:31.611198+0000 mon.c (mon.2) 106 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:07:31.999 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:07:31 vm06 bash[20625]: audit 2026-03-08T23:07:31.611885+0000 mon.c (mon.2) 107 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T23:07:31.999 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:07:31 vm06 bash[20625]: audit 2026-03-08T23:07:31.615938+0000 mon.a (mon.0) 846 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:07:31.999 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:07:31 vm06 bash[20625]: audit 2026-03-08T23:07:31.626708+0000 mon.c (mon.2) 108 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get-or-create-pending", "entity": "osd.2", "format": "json"}]: dispatch
2026-03-08T23:07:31.999 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:07:31 vm06 bash[20625]: audit 2026-03-08T23:07:31.627016+0000 mon.a (mon.0) 847 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create-pending", "entity": "osd.2", "format": "json"}]: dispatch
2026-03-08T23:07:31.999 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:07:31 vm06 bash[20625]: audit 2026-03-08T23:07:31.629034+0000 mon.a (mon.0) 848 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd='[{"prefix": "auth get-or-create-pending", "entity": "osd.2", "format": "json"}]': finished
2026-03-08T23:07:31.999 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:31 vm06 bash[27746]: audit 2026-03-08T23:07:30.957066+0000 mon.c (mon.2) 104 : audit [INF] from='client.? 192.168.123.106:0/4203771883' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.1"}]: dispatch
2026-03-08T23:07:31.999 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:31 vm06 bash[27746]: audit 2026-03-08T23:07:31.137709+0000 mon.a (mon.0) 839 : audit [INF] from='client.? 192.168.123.106:0/427798898' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.2"}]: dispatch
2026-03-08T23:07:31.999 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:31 vm06 bash[27746]: audit 2026-03-08T23:07:31.305230+0000 mon.a (mon.0) 840 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:07:31.999 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:31 vm06 bash[27746]: audit 2026-03-08T23:07:31.311416+0000 mon.a (mon.0) 841 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:07:31.999 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:31 vm06 bash[27746]: audit 2026-03-08T23:07:31.312532+0000 mon.c (mon.2) 105 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T23:07:32.000 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:31 vm06 bash[27746]: audit 2026-03-08T23:07:31.591855+0000 mon.a (mon.0) 842 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:07:32.000 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:31 vm06 bash[27746]: audit 2026-03-08T23:07:31.599112+0000 mon.a (mon.0) 843 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:07:32.000 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:31 vm06 bash[27746]: audit 2026-03-08T23:07:31.603777+0000 mon.a (mon.0) 844 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:07:32.000 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:31 vm06 bash[27746]: audit 2026-03-08T23:07:31.609336+0000 mon.a (mon.0) 845 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:07:32.000 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:31 vm06 bash[27746]: audit 2026-03-08T23:07:31.611198+0000 mon.c (mon.2) 106 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:07:32.000 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:31 vm06 bash[27746]: audit 2026-03-08T23:07:31.611885+0000 mon.c (mon.2) 107 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T23:07:32.000 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:31 vm06 bash[27746]: audit 2026-03-08T23:07:31.615938+0000 mon.a (mon.0) 846 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:07:32.000 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:31 vm06 bash[27746]: audit 2026-03-08T23:07:31.626708+0000 mon.c (mon.2) 108 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get-or-create-pending", "entity": "osd.2", "format": "json"}]: dispatch
2026-03-08T23:07:32.000 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:31 vm06 bash[27746]: audit 2026-03-08T23:07:31.627016+0000 mon.a (mon.0) 847 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create-pending", "entity": "osd.2", "format": "json"}]: dispatch
2026-03-08T23:07:32.000 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:31 vm06 bash[27746]: audit 2026-03-08T23:07:31.629034+0000 mon.a (mon.0) 848 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd='[{"prefix": "auth get-or-create-pending", "entity": "osd.2", "format": "json"}]': finished
2026-03-08T23:07:32.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:31 vm11 bash[23232]: audit 2026-03-08T23:07:30.957066+0000 mon.c (mon.2) 104 : audit [INF] from='client.? 192.168.123.106:0/4203771883' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.1"}]: dispatch
2026-03-08T23:07:32.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:31 vm11 bash[23232]: audit 2026-03-08T23:07:31.137709+0000 mon.a (mon.0) 839 : audit [INF] from='client.? 192.168.123.106:0/427798898' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.2"}]: dispatch
2026-03-08T23:07:32.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:31 vm11 bash[23232]: audit 2026-03-08T23:07:31.305230+0000 mon.a (mon.0) 840 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:07:32.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:31 vm11 bash[23232]: audit 2026-03-08T23:07:31.311416+0000 mon.a (mon.0) 841 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:07:32.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:31 vm11 bash[23232]: audit 2026-03-08T23:07:31.312532+0000 mon.c (mon.2) 105 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T23:07:32.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:31 vm11 bash[23232]: audit 2026-03-08T23:07:31.591855+0000 mon.a (mon.0) 842 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:07:32.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:31 vm11 bash[23232]: audit 2026-03-08T23:07:31.599112+0000 mon.a (mon.0) 843 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:07:32.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:31 vm11 bash[23232]: audit 2026-03-08T23:07:31.603777+0000 mon.a (mon.0) 844 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:07:32.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:31 vm11 bash[23232]: audit 2026-03-08T23:07:31.609336+0000 mon.a (mon.0) 845 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:07:32.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:31 vm11 bash[23232]: audit 2026-03-08T23:07:31.611198+0000 mon.c (mon.2) 106 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:07:32.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:31 vm11 bash[23232]: audit 2026-03-08T23:07:31.611885+0000 mon.c (mon.2) 107 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T23:07:32.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:31 vm11 bash[23232]: audit 2026-03-08T23:07:31.615938+0000 mon.a (mon.0) 846 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:07:32.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:31 vm11 bash[23232]: audit 2026-03-08T23:07:31.626708+0000 mon.c (mon.2) 108 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get-or-create-pending", "entity": "osd.2", "format": "json"}]: dispatch
2026-03-08T23:07:32.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:31 vm11 bash[23232]: audit 2026-03-08T23:07:31.627016+0000 mon.a (mon.0) 847 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create-pending", "entity": "osd.2", "format": "json"}]: dispatch
2026-03-08T23:07:32.059 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:31 vm11 bash[23232]: audit 2026-03-08T23:07:31.629034+0000 mon.a (mon.0) 848 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd='[{"prefix": "auth get-or-create-pending", "entity": "osd.2", "format": "json"}]': finished
2026-03-08T23:07:32.558 INFO:journalctl@ceph.iscsi.iscsi.a.vm11.stdout:Mar 08 23:07:32 vm11 bash[48986]: debug there is no tcmu-runner data available
2026-03-08T23:07:33.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:07:33 vm06 bash[20625]: audit 2026-03-08T23:07:31.298616+0000 mgr.y (mgr.24419) 183 : audit [DBG] from='client.24764 -' entity='client.admin' cmd=[{"prefix": "orch daemon", "action": "rotate-key", "name": "osd.2", "target": ["mon-mgr", ""]}]: dispatch
2026-03-08T23:07:33.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:07:33 vm06 bash[20625]: cephadm 2026-03-08T23:07:31.299071+0000 mgr.y (mgr.24419) 184 : cephadm [INF] Schedule rotate-key daemon osd.2
2026-03-08T23:07:33.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:07:33 vm06 bash[20625]: cephadm 2026-03-08T23:07:31.626367+0000 mgr.y (mgr.24419) 185 : cephadm [INF] Rotating authentication key for osd.2
2026-03-08T23:07:33.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:07:33 vm06 bash[20625]: cephadm 2026-03-08T23:07:31.633297+0000 mgr.y (mgr.24419) 186 : cephadm [INF] Reconfiguring daemon osd.2 on vm06
2026-03-08T23:07:33.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:07:33 vm06 bash[20625]: cluster 2026-03-08T23:07:31.667320+0000 mgr.y (mgr.24419) 187 : cluster [DBG] pgmap v121: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-08T23:07:33.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:07:33 vm06 bash[20625]: audit 2026-03-08T23:07:32.095783+0000 mon.a (mon.0) 849 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:07:33.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:07:33 vm06 bash[20625]: audit 2026-03-08T23:07:32.115823+0000 mon.a (mon.0) 850 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:07:33.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:07:33 vm06 bash[20625]: audit 2026-03-08T23:07:32.317239+0000 mon.a (mon.0) 851 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:07:33.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:07:33 vm06 bash[20625]: audit 2026-03-08T23:07:32.333800+0000 mon.a (mon.0) 852 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:07:33.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:33 vm06 bash[27746]: audit 2026-03-08T23:07:31.298616+0000 mgr.y (mgr.24419) 183 : audit [DBG] from='client.24764 -' entity='client.admin' cmd=[{"prefix": "orch daemon", "action": "rotate-key", "name": "osd.2", "target": ["mon-mgr", ""]}]: dispatch
2026-03-08T23:07:33.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:33 vm06 bash[27746]: cephadm 2026-03-08T23:07:31.299071+0000 mgr.y (mgr.24419) 184 : cephadm [INF] Schedule rotate-key daemon osd.2
2026-03-08T23:07:33.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:33 vm06 bash[27746]: cephadm 2026-03-08T23:07:31.626367+0000 mgr.y (mgr.24419) 185 : cephadm [INF] Rotating authentication key for osd.2
2026-03-08T23:07:33.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:33 vm06 bash[27746]: cephadm 2026-03-08T23:07:31.633297+0000 mgr.y (mgr.24419) 186 : cephadm [INF] Reconfiguring daemon osd.2 on vm06
2026-03-08T23:07:33.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:33 vm06 bash[27746]: cluster 2026-03-08T23:07:31.667320+0000 mgr.y (mgr.24419) 187 : cluster [DBG] pgmap v121: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-08T23:07:33.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:33 vm06 bash[27746]: audit 2026-03-08T23:07:32.095783+0000 mon.a (mon.0) 849 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:07:33.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:33 vm06 bash[27746]: audit 2026-03-08T23:07:32.115823+0000 mon.a (mon.0) 850 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:07:33.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:33 vm06 bash[27746]: audit 2026-03-08T23:07:32.317239+0000 mon.a (mon.0) 851 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:07:33.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:33 vm06 bash[27746]: audit 2026-03-08T23:07:32.333800+0000 mon.a (mon.0) 852 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:07:33.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:33 vm11 bash[23232]: audit 2026-03-08T23:07:31.298616+0000 mgr.y (mgr.24419) 183 : audit [DBG] from='client.24764 -' entity='client.admin' cmd=[{"prefix": "orch daemon", "action": "rotate-key", "name": "osd.2", "target": ["mon-mgr", ""]}]: dispatch
2026-03-08T23:07:33.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:33 vm11 bash[23232]: cephadm 2026-03-08T23:07:31.299071+0000 mgr.y (mgr.24419) 184 : cephadm [INF] Schedule rotate-key daemon osd.2
2026-03-08T23:07:33.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:33 vm11 bash[23232]: cephadm 2026-03-08T23:07:31.626367+0000 mgr.y (mgr.24419) 185 : cephadm [INF] Rotating authentication key for osd.2
2026-03-08T23:07:33.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:33 vm11 bash[23232]: cephadm 2026-03-08T23:07:31.633297+0000 mgr.y (mgr.24419) 186 : cephadm [INF] Reconfiguring daemon osd.2 on vm06
2026-03-08T23:07:33.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:33 vm11 bash[23232]: cluster 2026-03-08T23:07:31.667320+0000 mgr.y (mgr.24419) 187 : cluster [DBG] pgmap v121: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-08T23:07:33.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:33 vm11 bash[23232]: audit 2026-03-08T23:07:32.095783+0000 mon.a (mon.0) 849 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:07:33.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:33 vm11 bash[23232]: audit 2026-03-08T23:07:32.115823+0000 mon.a (mon.0) 850 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:07:33.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:33 vm11 bash[23232]: audit 2026-03-08T23:07:32.115823+0000 mon.a (mon.0) 850 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:07:33.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:33 vm11 bash[23232]: audit 2026-03-08T23:07:32.317239+0000 mon.a (mon.0) 851 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:07:33.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:33 vm11 bash[23232]: audit 2026-03-08T23:07:32.317239+0000 mon.a (mon.0) 851 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:07:33.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:33 vm11 bash[23232]: audit 2026-03-08T23:07:32.333800+0000 mon.a (mon.0) 852 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:07:33.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:33 vm11 bash[23232]: audit 2026-03-08T23:07:32.333800+0000 mon.a (mon.0) 852 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:07:34.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:07:34 vm06 bash[20625]: audit 2026-03-08T23:07:32.232684+0000 mgr.y (mgr.24419) 188 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:07:34.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:07:34 vm06 bash[20625]: audit 2026-03-08T23:07:32.232684+0000 mgr.y (mgr.24419) 188 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:07:34.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:34 vm06 bash[27746]: audit 2026-03-08T23:07:32.232684+0000 mgr.y (mgr.24419) 188 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:07:34.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:34 vm06 bash[27746]: audit 2026-03-08T23:07:32.232684+0000 mgr.y 
(mgr.24419) 188 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:07:34.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:34 vm11 bash[23232]: audit 2026-03-08T23:07:32.232684+0000 mgr.y (mgr.24419) 188 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:07:34.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:34 vm11 bash[23232]: audit 2026-03-08T23:07:32.232684+0000 mgr.y (mgr.24419) 188 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:07:35.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:07:35 vm06 bash[20625]: cluster 2026-03-08T23:07:33.667637+0000 mgr.y (mgr.24419) 189 : cluster [DBG] pgmap v122: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:07:35.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:07:35 vm06 bash[20625]: cluster 2026-03-08T23:07:33.667637+0000 mgr.y (mgr.24419) 189 : cluster [DBG] pgmap v122: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:07:35.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:35 vm06 bash[27746]: cluster 2026-03-08T23:07:33.667637+0000 mgr.y (mgr.24419) 189 : cluster [DBG] pgmap v122: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:07:35.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:35 vm06 bash[27746]: cluster 2026-03-08T23:07:33.667637+0000 mgr.y (mgr.24419) 189 : cluster [DBG] pgmap v122: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:07:35.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:35 vm11 bash[23232]: cluster 2026-03-08T23:07:33.667637+0000 
mgr.y (mgr.24419) 189 : cluster [DBG] pgmap v122: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:07:35.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:35 vm11 bash[23232]: cluster 2026-03-08T23:07:33.667637+0000 mgr.y (mgr.24419) 189 : cluster [DBG] pgmap v122: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:07:36.339 INFO:teuthology.orchestra.run.vm06.stderr:++ ceph auth get-key osd.2 2026-03-08T23:07:36.520 INFO:teuthology.orchestra.run.vm06.stderr:+ NK=AQBR/61p/b4ROhAAxGxKRg4jQ1WTOfDrnvTvgQ== 2026-03-08T23:07:36.520 INFO:teuthology.orchestra.run.vm06.stderr:+ '[' AQBR/61p/b4ROhAAxGxKRg4jQ1WTOfDrnvTvgQ== == AQBR/61p/b4ROhAAxGxKRg4jQ1WTOfDrnvTvgQ== ']' 2026-03-08T23:07:36.520 INFO:teuthology.orchestra.run.vm06.stderr:+ sleep 5 2026-03-08T23:07:37.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:07:37 vm06 bash[20625]: cluster 2026-03-08T23:07:35.668169+0000 mgr.y (mgr.24419) 190 : cluster [DBG] pgmap v123: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:07:37.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:07:37 vm06 bash[20625]: cluster 2026-03-08T23:07:35.668169+0000 mgr.y (mgr.24419) 190 : cluster [DBG] pgmap v123: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:07:37.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:07:37 vm06 bash[20625]: audit 2026-03-08T23:07:36.512695+0000 mon.a (mon.0) 853 : audit [INF] from='client.? 192.168.123.106:0/3587915363' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.2"}]: dispatch 2026-03-08T23:07:37.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:07:37 vm06 bash[20625]: audit 2026-03-08T23:07:36.512695+0000 mon.a (mon.0) 853 : audit [INF] from='client.? 
192.168.123.106:0/3587915363' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.2"}]: dispatch 2026-03-08T23:07:37.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:37 vm06 bash[27746]: cluster 2026-03-08T23:07:35.668169+0000 mgr.y (mgr.24419) 190 : cluster [DBG] pgmap v123: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:07:37.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:37 vm06 bash[27746]: cluster 2026-03-08T23:07:35.668169+0000 mgr.y (mgr.24419) 190 : cluster [DBG] pgmap v123: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:07:37.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:37 vm06 bash[27746]: audit 2026-03-08T23:07:36.512695+0000 mon.a (mon.0) 853 : audit [INF] from='client.? 192.168.123.106:0/3587915363' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.2"}]: dispatch 2026-03-08T23:07:37.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:37 vm06 bash[27746]: audit 2026-03-08T23:07:36.512695+0000 mon.a (mon.0) 853 : audit [INF] from='client.? 
192.168.123.106:0/3587915363' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.2"}]: dispatch 2026-03-08T23:07:37.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:37 vm11 bash[23232]: cluster 2026-03-08T23:07:35.668169+0000 mgr.y (mgr.24419) 190 : cluster [DBG] pgmap v123: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:07:37.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:37 vm11 bash[23232]: cluster 2026-03-08T23:07:35.668169+0000 mgr.y (mgr.24419) 190 : cluster [DBG] pgmap v123: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:07:37.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:37 vm11 bash[23232]: audit 2026-03-08T23:07:36.512695+0000 mon.a (mon.0) 853 : audit [INF] from='client.? 192.168.123.106:0/3587915363' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.2"}]: dispatch 2026-03-08T23:07:37.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:37 vm11 bash[23232]: audit 2026-03-08T23:07:36.512695+0000 mon.a (mon.0) 853 : audit [INF] from='client.? 
192.168.123.106:0/3587915363' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.2"}]: dispatch 2026-03-08T23:07:38.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:07:38 vm06 bash[20625]: audit 2026-03-08T23:07:37.834479+0000 mon.c (mon.2) 109 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:07:38.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:07:38 vm06 bash[20625]: audit 2026-03-08T23:07:37.834479+0000 mon.c (mon.2) 109 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:07:38.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:38 vm06 bash[27746]: audit 2026-03-08T23:07:37.834479+0000 mon.c (mon.2) 109 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:07:38.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:38 vm06 bash[27746]: audit 2026-03-08T23:07:37.834479+0000 mon.c (mon.2) 109 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:07:38.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:38 vm11 bash[23232]: audit 2026-03-08T23:07:37.834479+0000 mon.c (mon.2) 109 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:07:38.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:38 vm11 bash[23232]: audit 2026-03-08T23:07:37.834479+0000 mon.c (mon.2) 109 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:07:39.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:07:39 vm06 bash[20625]: cluster 
2026-03-08T23:07:37.668416+0000 mgr.y (mgr.24419) 191 : cluster [DBG] pgmap v124: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:07:39.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:07:39 vm06 bash[20625]: cluster 2026-03-08T23:07:37.668416+0000 mgr.y (mgr.24419) 191 : cluster [DBG] pgmap v124: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:07:39.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:39 vm06 bash[27746]: cluster 2026-03-08T23:07:37.668416+0000 mgr.y (mgr.24419) 191 : cluster [DBG] pgmap v124: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:07:39.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:39 vm06 bash[27746]: cluster 2026-03-08T23:07:37.668416+0000 mgr.y (mgr.24419) 191 : cluster [DBG] pgmap v124: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:07:39.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:39 vm11 bash[23232]: cluster 2026-03-08T23:07:37.668416+0000 mgr.y (mgr.24419) 191 : cluster [DBG] pgmap v124: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:07:39.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:39 vm11 bash[23232]: cluster 2026-03-08T23:07:37.668416+0000 mgr.y (mgr.24419) 191 : cluster [DBG] pgmap v124: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:07:41.029 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:07:40 vm06 bash[20883]: ::ffff:192.168.123.111 - - [08/Mar/2026:23:07:40] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-08T23:07:41.522 INFO:teuthology.orchestra.run.vm06.stderr:++ ceph auth get-key osd.2 2026-03-08T23:07:41.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:07:41 vm06 
bash[20625]: cluster 2026-03-08T23:07:39.668646+0000 mgr.y (mgr.24419) 192 : cluster [DBG] pgmap v125: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:07:41.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:07:41 vm06 bash[20625]: cluster 2026-03-08T23:07:39.668646+0000 mgr.y (mgr.24419) 192 : cluster [DBG] pgmap v125: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:07:41.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:41 vm06 bash[27746]: cluster 2026-03-08T23:07:39.668646+0000 mgr.y (mgr.24419) 192 : cluster [DBG] pgmap v125: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:07:41.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:41 vm06 bash[27746]: cluster 2026-03-08T23:07:39.668646+0000 mgr.y (mgr.24419) 192 : cluster [DBG] pgmap v125: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:07:41.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:41 vm11 bash[23232]: cluster 2026-03-08T23:07:39.668646+0000 mgr.y (mgr.24419) 192 : cluster [DBG] pgmap v125: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:07:41.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:41 vm11 bash[23232]: cluster 2026-03-08T23:07:39.668646+0000 mgr.y (mgr.24419) 192 : cluster [DBG] pgmap v125: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:07:41.723 INFO:teuthology.orchestra.run.vm06.stderr:+ NK=AQBR/61p/b4ROhAAxGxKRg4jQ1WTOfDrnvTvgQ== 2026-03-08T23:07:41.723 INFO:teuthology.orchestra.run.vm06.stderr:+ '[' AQBR/61p/b4ROhAAxGxKRg4jQ1WTOfDrnvTvgQ== == AQBR/61p/b4ROhAAxGxKRg4jQ1WTOfDrnvTvgQ== ']' 2026-03-08T23:07:41.723 INFO:teuthology.orchestra.run.vm06.stderr:+ sleep 5 
2026-03-08T23:07:42.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:07:42 vm06 bash[20625]: audit 2026-03-08T23:07:41.714660+0000 mon.a (mon.0) 854 : audit [INF] from='client.? 192.168.123.106:0/677413026' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.2"}]: dispatch 2026-03-08T23:07:42.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:07:42 vm06 bash[20625]: audit 2026-03-08T23:07:41.714660+0000 mon.a (mon.0) 854 : audit [INF] from='client.? 192.168.123.106:0/677413026' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.2"}]: dispatch 2026-03-08T23:07:42.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:42 vm06 bash[27746]: audit 2026-03-08T23:07:41.714660+0000 mon.a (mon.0) 854 : audit [INF] from='client.? 192.168.123.106:0/677413026' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.2"}]: dispatch 2026-03-08T23:07:42.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:42 vm06 bash[27746]: audit 2026-03-08T23:07:41.714660+0000 mon.a (mon.0) 854 : audit [INF] from='client.? 192.168.123.106:0/677413026' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.2"}]: dispatch 2026-03-08T23:07:42.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:42 vm11 bash[23232]: audit 2026-03-08T23:07:41.714660+0000 mon.a (mon.0) 854 : audit [INF] from='client.? 192.168.123.106:0/677413026' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.2"}]: dispatch 2026-03-08T23:07:42.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:42 vm11 bash[23232]: audit 2026-03-08T23:07:41.714660+0000 mon.a (mon.0) 854 : audit [INF] from='client.? 
192.168.123.106:0/677413026' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.2"}]: dispatch 2026-03-08T23:07:42.558 INFO:journalctl@ceph.iscsi.iscsi.a.vm11.stdout:Mar 08 23:07:42 vm11 bash[48986]: debug there is no tcmu-runner data available 2026-03-08T23:07:43.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:07:43 vm06 bash[20625]: cluster 2026-03-08T23:07:41.669176+0000 mgr.y (mgr.24419) 193 : cluster [DBG] pgmap v126: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:07:43.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:07:43 vm06 bash[20625]: cluster 2026-03-08T23:07:41.669176+0000 mgr.y (mgr.24419) 193 : cluster [DBG] pgmap v126: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:07:43.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:43 vm06 bash[27746]: cluster 2026-03-08T23:07:41.669176+0000 mgr.y (mgr.24419) 193 : cluster [DBG] pgmap v126: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:07:43.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:43 vm06 bash[27746]: cluster 2026-03-08T23:07:41.669176+0000 mgr.y (mgr.24419) 193 : cluster [DBG] pgmap v126: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:07:43.557 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:43 vm11 bash[23232]: cluster 2026-03-08T23:07:41.669176+0000 mgr.y (mgr.24419) 193 : cluster [DBG] pgmap v126: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:07:43.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:43 vm11 bash[23232]: cluster 2026-03-08T23:07:41.669176+0000 mgr.y (mgr.24419) 193 : cluster [DBG] pgmap v126: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 
2026-03-08T23:07:44.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:07:44 vm06 bash[20625]: audit 2026-03-08T23:07:42.243223+0000 mgr.y (mgr.24419) 194 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:07:44.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:07:44 vm06 bash[20625]: audit 2026-03-08T23:07:42.243223+0000 mgr.y (mgr.24419) 194 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:07:44.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:44 vm06 bash[27746]: audit 2026-03-08T23:07:42.243223+0000 mgr.y (mgr.24419) 194 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:07:44.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:44 vm06 bash[27746]: audit 2026-03-08T23:07:42.243223+0000 mgr.y (mgr.24419) 194 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:07:44.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:44 vm11 bash[23232]: audit 2026-03-08T23:07:42.243223+0000 mgr.y (mgr.24419) 194 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:07:44.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:44 vm11 bash[23232]: audit 2026-03-08T23:07:42.243223+0000 mgr.y (mgr.24419) 194 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:07:45.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:07:45 vm06 bash[20625]: cluster 2026-03-08T23:07:43.669458+0000 mgr.y (mgr.24419) 195 : cluster [DBG] pgmap v127: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 
op/s 2026-03-08T23:07:45.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:07:45 vm06 bash[20625]: cluster 2026-03-08T23:07:43.669458+0000 mgr.y (mgr.24419) 195 : cluster [DBG] pgmap v127: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:07:45.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:45 vm06 bash[27746]: cluster 2026-03-08T23:07:43.669458+0000 mgr.y (mgr.24419) 195 : cluster [DBG] pgmap v127: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:07:45.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:45 vm06 bash[27746]: cluster 2026-03-08T23:07:43.669458+0000 mgr.y (mgr.24419) 195 : cluster [DBG] pgmap v127: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:07:45.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:45 vm11 bash[23232]: cluster 2026-03-08T23:07:43.669458+0000 mgr.y (mgr.24419) 195 : cluster [DBG] pgmap v127: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:07:45.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:45 vm11 bash[23232]: cluster 2026-03-08T23:07:43.669458+0000 mgr.y (mgr.24419) 195 : cluster [DBG] pgmap v127: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:07:46.725 INFO:teuthology.orchestra.run.vm06.stderr:++ ceph auth get-key osd.2 2026-03-08T23:07:46.915 INFO:teuthology.orchestra.run.vm06.stderr:+ NK=AQBR/61p/b4ROhAAxGxKRg4jQ1WTOfDrnvTvgQ== 2026-03-08T23:07:46.915 INFO:teuthology.orchestra.run.vm06.stderr:+ '[' AQBR/61p/b4ROhAAxGxKRg4jQ1WTOfDrnvTvgQ== == AQBR/61p/b4ROhAAxGxKRg4jQ1WTOfDrnvTvgQ== ']' 2026-03-08T23:07:46.915 INFO:teuthology.orchestra.run.vm06.stderr:+ sleep 5 2026-03-08T23:07:47.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:07:47 vm06 bash[20625]: cluster 
2026-03-08T23:07:45.669885+0000 mgr.y (mgr.24419) 196 : cluster [DBG] pgmap v128: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:07:47.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:07:47 vm06 bash[20625]: cluster 2026-03-08T23:07:45.669885+0000 mgr.y (mgr.24419) 196 : cluster [DBG] pgmap v128: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:07:47.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:07:47 vm06 bash[20625]: audit 2026-03-08T23:07:46.903230+0000 mon.b (mon.1) 43 : audit [INF] from='client.? 192.168.123.106:0/3297777500' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.2"}]: dispatch 2026-03-08T23:07:47.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:07:47 vm06 bash[20625]: audit 2026-03-08T23:07:46.903230+0000 mon.b (mon.1) 43 : audit [INF] from='client.? 192.168.123.106:0/3297777500' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.2"}]: dispatch 2026-03-08T23:07:47.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:47 vm06 bash[27746]: cluster 2026-03-08T23:07:45.669885+0000 mgr.y (mgr.24419) 196 : cluster [DBG] pgmap v128: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:07:47.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:47 vm06 bash[27746]: cluster 2026-03-08T23:07:45.669885+0000 mgr.y (mgr.24419) 196 : cluster [DBG] pgmap v128: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:07:47.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:47 vm06 bash[27746]: audit 2026-03-08T23:07:46.903230+0000 mon.b (mon.1) 43 : audit [INF] from='client.? 
192.168.123.106:0/3297777500' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.2"}]: dispatch 2026-03-08T23:07:47.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:47 vm06 bash[27746]: audit 2026-03-08T23:07:46.903230+0000 mon.b (mon.1) 43 : audit [INF] from='client.? 192.168.123.106:0/3297777500' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.2"}]: dispatch 2026-03-08T23:07:47.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:47 vm11 bash[23232]: cluster 2026-03-08T23:07:45.669885+0000 mgr.y (mgr.24419) 196 : cluster [DBG] pgmap v128: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:07:47.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:47 vm11 bash[23232]: cluster 2026-03-08T23:07:45.669885+0000 mgr.y (mgr.24419) 196 : cluster [DBG] pgmap v128: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:07:47.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:47 vm11 bash[23232]: audit 2026-03-08T23:07:46.903230+0000 mon.b (mon.1) 43 : audit [INF] from='client.? 192.168.123.106:0/3297777500' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.2"}]: dispatch 2026-03-08T23:07:47.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:47 vm11 bash[23232]: audit 2026-03-08T23:07:46.903230+0000 mon.b (mon.1) 43 : audit [INF] from='client.? 
192.168.123.106:0/3297777500' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.2"}]: dispatch 2026-03-08T23:07:49.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:07:49 vm06 bash[20625]: cluster 2026-03-08T23:07:47.670132+0000 mgr.y (mgr.24419) 197 : cluster [DBG] pgmap v129: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:07:49.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:07:49 vm06 bash[20625]: cluster 2026-03-08T23:07:47.670132+0000 mgr.y (mgr.24419) 197 : cluster [DBG] pgmap v129: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:07:49.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:49 vm06 bash[27746]: cluster 2026-03-08T23:07:47.670132+0000 mgr.y (mgr.24419) 197 : cluster [DBG] pgmap v129: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:07:49.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:49 vm06 bash[27746]: cluster 2026-03-08T23:07:47.670132+0000 mgr.y (mgr.24419) 197 : cluster [DBG] pgmap v129: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:07:49.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:49 vm11 bash[23232]: cluster 2026-03-08T23:07:47.670132+0000 mgr.y (mgr.24419) 197 : cluster [DBG] pgmap v129: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:07:49.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:49 vm11 bash[23232]: cluster 2026-03-08T23:07:47.670132+0000 mgr.y (mgr.24419) 197 : cluster [DBG] pgmap v129: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:07:51.029 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:07:50 vm06 bash[20883]: ::ffff:192.168.123.111 - - [08/Mar/2026:23:07:50] "GET /metrics 
HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-08T23:07:51.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:07:51 vm06 bash[20625]: cluster 2026-03-08T23:07:49.670409+0000 mgr.y (mgr.24419) 198 : cluster [DBG] pgmap v130: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:07:51.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:07:51 vm06 bash[20625]: cluster 2026-03-08T23:07:49.670409+0000 mgr.y (mgr.24419) 198 : cluster [DBG] pgmap v130: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:07:51.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:51 vm06 bash[27746]: cluster 2026-03-08T23:07:49.670409+0000 mgr.y (mgr.24419) 198 : cluster [DBG] pgmap v130: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:07:51.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:51 vm06 bash[27746]: cluster 2026-03-08T23:07:49.670409+0000 mgr.y (mgr.24419) 198 : cluster [DBG] pgmap v130: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:07:51.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:51 vm11 bash[23232]: cluster 2026-03-08T23:07:49.670409+0000 mgr.y (mgr.24419) 198 : cluster [DBG] pgmap v130: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:07:51.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:51 vm11 bash[23232]: cluster 2026-03-08T23:07:49.670409+0000 mgr.y (mgr.24419) 198 : cluster [DBG] pgmap v130: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:07:51.916 INFO:teuthology.orchestra.run.vm06.stderr:++ ceph auth get-key osd.2 2026-03-08T23:07:52.100 INFO:teuthology.orchestra.run.vm06.stderr:+ NK=AQBR/61p/b4ROhAAxGxKRg4jQ1WTOfDrnvTvgQ== 2026-03-08T23:07:52.101 
INFO:teuthology.orchestra.run.vm06.stderr:+ '[' AQBR/61p/b4ROhAAxGxKRg4jQ1WTOfDrnvTvgQ== == AQBR/61p/b4ROhAAxGxKRg4jQ1WTOfDrnvTvgQ== ']' 2026-03-08T23:07:52.101 INFO:teuthology.orchestra.run.vm06.stderr:+ sleep 5 2026-03-08T23:07:52.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:07:52 vm06 bash[20625]: audit 2026-03-08T23:07:52.092691+0000 mon.a (mon.0) 855 : audit [INF] from='client.? 192.168.123.106:0/4174491587' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.2"}]: dispatch 2026-03-08T23:07:52.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:07:52 vm06 bash[20625]: audit 2026-03-08T23:07:52.092691+0000 mon.a (mon.0) 855 : audit [INF] from='client.? 192.168.123.106:0/4174491587' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.2"}]: dispatch 2026-03-08T23:07:52.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:52 vm06 bash[27746]: audit 2026-03-08T23:07:52.092691+0000 mon.a (mon.0) 855 : audit [INF] from='client.? 192.168.123.106:0/4174491587' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.2"}]: dispatch 2026-03-08T23:07:52.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:52 vm06 bash[27746]: audit 2026-03-08T23:07:52.092691+0000 mon.a (mon.0) 855 : audit [INF] from='client.? 192.168.123.106:0/4174491587' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.2"}]: dispatch 2026-03-08T23:07:52.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:52 vm11 bash[23232]: audit 2026-03-08T23:07:52.092691+0000 mon.a (mon.0) 855 : audit [INF] from='client.? 192.168.123.106:0/4174491587' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.2"}]: dispatch 2026-03-08T23:07:52.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:52 vm11 bash[23232]: audit 2026-03-08T23:07:52.092691+0000 mon.a (mon.0) 855 : audit [INF] from='client.? 
192.168.123.106:0/4174491587' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.2"}]: dispatch 2026-03-08T23:07:52.558 INFO:journalctl@ceph.iscsi.iscsi.a.vm11.stdout:Mar 08 23:07:52 vm11 bash[48986]: debug there is no tcmu-runner data available 2026-03-08T23:07:53.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:07:53 vm06 bash[20625]: cluster 2026-03-08T23:07:51.670905+0000 mgr.y (mgr.24419) 199 : cluster [DBG] pgmap v131: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:07:53.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:07:53 vm06 bash[20625]: cluster 2026-03-08T23:07:51.670905+0000 mgr.y (mgr.24419) 199 : cluster [DBG] pgmap v131: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:07:53.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:07:53 vm06 bash[20625]: audit 2026-03-08T23:07:52.840992+0000 mon.c (mon.2) 110 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:07:53.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:07:53 vm06 bash[20625]: audit 2026-03-08T23:07:52.840992+0000 mon.c (mon.2) 110 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:07:53.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:53 vm06 bash[27746]: cluster 2026-03-08T23:07:51.670905+0000 mgr.y (mgr.24419) 199 : cluster [DBG] pgmap v131: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:07:53.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:53 vm06 bash[27746]: cluster 2026-03-08T23:07:51.670905+0000 mgr.y (mgr.24419) 199 : cluster [DBG] pgmap v131: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 
2026-03-08T23:07:53.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:53 vm06 bash[27746]: audit 2026-03-08T23:07:52.840992+0000 mon.c (mon.2) 110 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:07:53.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:53 vm06 bash[27746]: audit 2026-03-08T23:07:52.840992+0000 mon.c (mon.2) 110 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:07:53.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:53 vm11 bash[23232]: cluster 2026-03-08T23:07:51.670905+0000 mgr.y (mgr.24419) 199 : cluster [DBG] pgmap v131: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:07:53.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:53 vm11 bash[23232]: cluster 2026-03-08T23:07:51.670905+0000 mgr.y (mgr.24419) 199 : cluster [DBG] pgmap v131: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:07:53.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:53 vm11 bash[23232]: audit 2026-03-08T23:07:52.840992+0000 mon.c (mon.2) 110 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:07:53.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:53 vm11 bash[23232]: audit 2026-03-08T23:07:52.840992+0000 mon.c (mon.2) 110 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:07:54.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:07:54 vm06 bash[20625]: audit 2026-03-08T23:07:52.253884+0000 mgr.y (mgr.24419) 200 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service 
status", "format": "json"}]: dispatch 2026-03-08T23:07:54.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:07:54 vm06 bash[20625]: audit 2026-03-08T23:07:52.253884+0000 mgr.y (mgr.24419) 200 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:07:54.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:07:54 vm06 bash[20625]: cluster 2026-03-08T23:07:53.671224+0000 mgr.y (mgr.24419) 201 : cluster [DBG] pgmap v132: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:07:54.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:07:54 vm06 bash[20625]: cluster 2026-03-08T23:07:53.671224+0000 mgr.y (mgr.24419) 201 : cluster [DBG] pgmap v132: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:07:54.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:54 vm06 bash[27746]: audit 2026-03-08T23:07:52.253884+0000 mgr.y (mgr.24419) 200 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:07:54.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:54 vm06 bash[27746]: audit 2026-03-08T23:07:52.253884+0000 mgr.y (mgr.24419) 200 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:07:54.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:54 vm06 bash[27746]: cluster 2026-03-08T23:07:53.671224+0000 mgr.y (mgr.24419) 201 : cluster [DBG] pgmap v132: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:07:54.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:54 vm06 bash[27746]: cluster 2026-03-08T23:07:53.671224+0000 mgr.y (mgr.24419) 201 : cluster [DBG] pgmap v132: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 
GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:07:54.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:54 vm11 bash[23232]: audit 2026-03-08T23:07:52.253884+0000 mgr.y (mgr.24419) 200 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:07:54.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:54 vm11 bash[23232]: audit 2026-03-08T23:07:52.253884+0000 mgr.y (mgr.24419) 200 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:07:54.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:54 vm11 bash[23232]: cluster 2026-03-08T23:07:53.671224+0000 mgr.y (mgr.24419) 201 : cluster [DBG] pgmap v132: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:07:54.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:54 vm11 bash[23232]: cluster 2026-03-08T23:07:53.671224+0000 mgr.y (mgr.24419) 201 : cluster [DBG] pgmap v132: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:07:57.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:07:56 vm06 bash[20625]: cluster 2026-03-08T23:07:55.671682+0000 mgr.y (mgr.24419) 202 : cluster [DBG] pgmap v133: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:07:57.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:07:56 vm06 bash[20625]: cluster 2026-03-08T23:07:55.671682+0000 mgr.y (mgr.24419) 202 : cluster [DBG] pgmap v133: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:07:57.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:56 vm06 bash[27746]: cluster 2026-03-08T23:07:55.671682+0000 mgr.y (mgr.24419) 202 : cluster [DBG] pgmap v133: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB 
used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:07:57.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:56 vm06 bash[27746]: cluster 2026-03-08T23:07:55.671682+0000 mgr.y (mgr.24419) 202 : cluster [DBG] pgmap v133: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:07:57.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:56 vm11 bash[23232]: cluster 2026-03-08T23:07:55.671682+0000 mgr.y (mgr.24419) 202 : cluster [DBG] pgmap v133: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:07:57.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:56 vm11 bash[23232]: cluster 2026-03-08T23:07:55.671682+0000 mgr.y (mgr.24419) 202 : cluster [DBG] pgmap v133: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:07:57.103 INFO:teuthology.orchestra.run.vm06.stderr:++ ceph auth get-key osd.2 2026-03-08T23:07:57.288 INFO:teuthology.orchestra.run.vm06.stderr:+ NK=AQBR/61p/b4ROhAAxGxKRg4jQ1WTOfDrnvTvgQ== 2026-03-08T23:07:57.288 INFO:teuthology.orchestra.run.vm06.stderr:+ '[' AQBR/61p/b4ROhAAxGxKRg4jQ1WTOfDrnvTvgQ== == AQBR/61p/b4ROhAAxGxKRg4jQ1WTOfDrnvTvgQ== ']' 2026-03-08T23:07:57.288 INFO:teuthology.orchestra.run.vm06.stderr:+ sleep 5 2026-03-08T23:07:58.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:07:57 vm06 bash[20625]: audit 2026-03-08T23:07:57.278026+0000 mon.b (mon.1) 44 : audit [INF] from='client.? 192.168.123.106:0/1371513580' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.2"}]: dispatch 2026-03-08T23:07:58.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:07:57 vm06 bash[20625]: audit 2026-03-08T23:07:57.278026+0000 mon.b (mon.1) 44 : audit [INF] from='client.? 
192.168.123.106:0/1371513580' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.2"}]: dispatch 2026-03-08T23:07:58.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:57 vm06 bash[27746]: audit 2026-03-08T23:07:57.278026+0000 mon.b (mon.1) 44 : audit [INF] from='client.? 192.168.123.106:0/1371513580' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.2"}]: dispatch 2026-03-08T23:07:58.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:57 vm06 bash[27746]: audit 2026-03-08T23:07:57.278026+0000 mon.b (mon.1) 44 : audit [INF] from='client.? 192.168.123.106:0/1371513580' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.2"}]: dispatch 2026-03-08T23:07:58.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:57 vm11 bash[23232]: audit 2026-03-08T23:07:57.278026+0000 mon.b (mon.1) 44 : audit [INF] from='client.? 192.168.123.106:0/1371513580' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.2"}]: dispatch 2026-03-08T23:07:58.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:57 vm11 bash[23232]: audit 2026-03-08T23:07:57.278026+0000 mon.b (mon.1) 44 : audit [INF] from='client.? 
192.168.123.106:0/1371513580' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.2"}]: dispatch 2026-03-08T23:07:59.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:07:58 vm06 bash[20625]: cluster 2026-03-08T23:07:57.671982+0000 mgr.y (mgr.24419) 203 : cluster [DBG] pgmap v134: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:07:59.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:07:58 vm06 bash[20625]: cluster 2026-03-08T23:07:57.671982+0000 mgr.y (mgr.24419) 203 : cluster [DBG] pgmap v134: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:07:59.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:58 vm06 bash[27746]: cluster 2026-03-08T23:07:57.671982+0000 mgr.y (mgr.24419) 203 : cluster [DBG] pgmap v134: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:07:59.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:07:58 vm06 bash[27746]: cluster 2026-03-08T23:07:57.671982+0000 mgr.y (mgr.24419) 203 : cluster [DBG] pgmap v134: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:07:59.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:58 vm11 bash[23232]: cluster 2026-03-08T23:07:57.671982+0000 mgr.y (mgr.24419) 203 : cluster [DBG] pgmap v134: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:07:59.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:07:58 vm11 bash[23232]: cluster 2026-03-08T23:07:57.671982+0000 mgr.y (mgr.24419) 203 : cluster [DBG] pgmap v134: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:08:00.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:00 vm06 bash[20625]: cluster 2026-03-08T23:07:59.672242+0000 mgr.y (mgr.24419) 204 : cluster 
[DBG] pgmap v135: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:08:00.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:00 vm06 bash[20625]: cluster 2026-03-08T23:07:59.672242+0000 mgr.y (mgr.24419) 204 : cluster [DBG] pgmap v135: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:08:00.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:00 vm06 bash[27746]: cluster 2026-03-08T23:07:59.672242+0000 mgr.y (mgr.24419) 204 : cluster [DBG] pgmap v135: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:08:00.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:00 vm06 bash[27746]: cluster 2026-03-08T23:07:59.672242+0000 mgr.y (mgr.24419) 204 : cluster [DBG] pgmap v135: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:08:00.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:00 vm11 bash[23232]: cluster 2026-03-08T23:07:59.672242+0000 mgr.y (mgr.24419) 204 : cluster [DBG] pgmap v135: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:08:00.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:00 vm11 bash[23232]: cluster 2026-03-08T23:07:59.672242+0000 mgr.y (mgr.24419) 204 : cluster [DBG] pgmap v135: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:08:01.029 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:08:00 vm06 bash[20883]: ::ffff:192.168.123.111 - - [08/Mar/2026:23:08:00] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-08T23:08:02.290 INFO:teuthology.orchestra.run.vm06.stderr:++ ceph auth get-key osd.2 2026-03-08T23:08:02.487 INFO:teuthology.orchestra.run.vm06.stderr:+ NK=AQAzAa5pufdfJRAAGq+a98VUYFTErNM15luBUg== 2026-03-08T23:08:02.487 
INFO:teuthology.orchestra.run.vm06.stderr:+ '[' AQBR/61p/b4ROhAAxGxKRg4jQ1WTOfDrnvTvgQ== == AQAzAa5pufdfJRAAGq+a98VUYFTErNM15luBUg== ']' 2026-03-08T23:08:02.487 INFO:teuthology.orchestra.run.vm06.stderr:+ for f in osd.0 osd.1 osd.2 osd.3 osd.4 osd.5 osd.6 osd.7 mgr.y mgr.x 2026-03-08T23:08:02.488 INFO:teuthology.orchestra.run.vm06.stderr:+ echo 'rotating key for osd.3' 2026-03-08T23:08:02.488 INFO:teuthology.orchestra.run.vm06.stdout:rotating key for osd.3 2026-03-08T23:08:02.488 INFO:teuthology.orchestra.run.vm06.stderr:++ ceph auth get-key osd.3 2026-03-08T23:08:02.558 INFO:journalctl@ceph.iscsi.iscsi.a.vm11.stdout:Mar 08 23:08:02 vm11 bash[48986]: debug there is no tcmu-runner data available 2026-03-08T23:08:02.673 INFO:teuthology.orchestra.run.vm06.stderr:+ K=AQBz/61pjxFbMxAAcKCrG39eVjspUy4thktQgA== 2026-03-08T23:08:02.673 INFO:teuthology.orchestra.run.vm06.stderr:+ NK=AQBz/61pjxFbMxAAcKCrG39eVjspUy4thktQgA== 2026-03-08T23:08:02.674 INFO:teuthology.orchestra.run.vm06.stderr:+ ceph orch daemon rotate-key osd.3 2026-03-08T23:08:02.897 INFO:teuthology.orchestra.run.vm06.stdout:Scheduled to rotate-key osd.3 on host 'vm06' 2026-03-08T23:08:02.910 INFO:teuthology.orchestra.run.vm06.stderr:+ '[' AQBz/61pjxFbMxAAcKCrG39eVjspUy4thktQgA== == AQBz/61pjxFbMxAAcKCrG39eVjspUy4thktQgA== ']' 2026-03-08T23:08:02.910 INFO:teuthology.orchestra.run.vm06.stderr:+ sleep 5 2026-03-08T23:08:03.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:02 vm06 bash[20625]: cluster 2026-03-08T23:08:01.672732+0000 mgr.y (mgr.24419) 205 : cluster [DBG] pgmap v136: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:08:03.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:02 vm06 bash[20625]: cluster 2026-03-08T23:08:01.672732+0000 mgr.y (mgr.24419) 205 : cluster [DBG] pgmap v136: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:08:03.029 
INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:02 vm06 bash[20625]: audit 2026-03-08T23:08:02.478459+0000 mon.c (mon.2) 111 : audit [INF] from='client.? 192.168.123.106:0/3473320908' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.2"}]: dispatch 2026-03-08T23:08:03.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:02 vm06 bash[20625]: audit 2026-03-08T23:08:02.478459+0000 mon.c (mon.2) 111 : audit [INF] from='client.? 192.168.123.106:0/3473320908' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.2"}]: dispatch 2026-03-08T23:08:03.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:02 vm06 bash[20625]: audit 2026-03-08T23:08:02.665788+0000 mon.a (mon.0) 856 : audit [INF] from='client.? 192.168.123.106:0/2053093518' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.3"}]: dispatch 2026-03-08T23:08:03.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:02 vm06 bash[20625]: audit 2026-03-08T23:08:02.665788+0000 mon.a (mon.0) 856 : audit [INF] from='client.? 192.168.123.106:0/2053093518' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.3"}]: dispatch 2026-03-08T23:08:03.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:02 vm06 bash[27746]: cluster 2026-03-08T23:08:01.672732+0000 mgr.y (mgr.24419) 205 : cluster [DBG] pgmap v136: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:08:03.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:02 vm06 bash[27746]: cluster 2026-03-08T23:08:01.672732+0000 mgr.y (mgr.24419) 205 : cluster [DBG] pgmap v136: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:08:03.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:02 vm06 bash[27746]: audit 2026-03-08T23:08:02.478459+0000 mon.c (mon.2) 111 : audit [INF] from='client.? 
192.168.123.106:0/3473320908' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.2"}]: dispatch 2026-03-08T23:08:03.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:02 vm06 bash[27746]: audit 2026-03-08T23:08:02.478459+0000 mon.c (mon.2) 111 : audit [INF] from='client.? 192.168.123.106:0/3473320908' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.2"}]: dispatch 2026-03-08T23:08:03.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:02 vm06 bash[27746]: audit 2026-03-08T23:08:02.665788+0000 mon.a (mon.0) 856 : audit [INF] from='client.? 192.168.123.106:0/2053093518' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.3"}]: dispatch 2026-03-08T23:08:03.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:02 vm06 bash[27746]: audit 2026-03-08T23:08:02.665788+0000 mon.a (mon.0) 856 : audit [INF] from='client.? 192.168.123.106:0/2053093518' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.3"}]: dispatch 2026-03-08T23:08:03.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:02 vm11 bash[23232]: cluster 2026-03-08T23:08:01.672732+0000 mgr.y (mgr.24419) 205 : cluster [DBG] pgmap v136: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:08:03.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:02 vm11 bash[23232]: cluster 2026-03-08T23:08:01.672732+0000 mgr.y (mgr.24419) 205 : cluster [DBG] pgmap v136: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:08:03.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:02 vm11 bash[23232]: audit 2026-03-08T23:08:02.478459+0000 mon.c (mon.2) 111 : audit [INF] from='client.? 
192.168.123.106:0/3473320908' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.2"}]: dispatch 2026-03-08T23:08:03.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:02 vm11 bash[23232]: audit 2026-03-08T23:08:02.478459+0000 mon.c (mon.2) 111 : audit [INF] from='client.? 192.168.123.106:0/3473320908' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.2"}]: dispatch 2026-03-08T23:08:03.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:02 vm11 bash[23232]: audit 2026-03-08T23:08:02.665788+0000 mon.a (mon.0) 856 : audit [INF] from='client.? 192.168.123.106:0/2053093518' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.3"}]: dispatch 2026-03-08T23:08:03.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:02 vm11 bash[23232]: audit 2026-03-08T23:08:02.665788+0000 mon.a (mon.0) 856 : audit [INF] from='client.? 192.168.123.106:0/2053093518' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.3"}]: dispatch 2026-03-08T23:08:03.846 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:03 vm11 bash[23232]: audit 2026-03-08T23:08:02.262791+0000 mgr.y (mgr.24419) 206 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:08:03.846 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:03 vm11 bash[23232]: audit 2026-03-08T23:08:02.262791+0000 mgr.y (mgr.24419) 206 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:08:03.846 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:03 vm11 bash[23232]: audit 2026-03-08T23:08:02.830854+0000 mgr.y (mgr.24419) 207 : audit [DBG] from='client.24812 -' entity='client.admin' cmd=[{"prefix": "orch daemon", "action": "rotate-key", "name": "osd.3", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:08:03.846 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:03 vm11 bash[23232]: 
audit 2026-03-08T23:08:02.830854+0000 mgr.y (mgr.24419) 207 : audit [DBG] from='client.24812 -' entity='client.admin' cmd=[{"prefix": "orch daemon", "action": "rotate-key", "name": "osd.3", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:08:03.846 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:03 vm11 bash[23232]: cephadm 2026-03-08T23:08:02.831294+0000 mgr.y (mgr.24419) 208 : cephadm [INF] Schedule rotate-key daemon osd.3 2026-03-08T23:08:03.846 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:03 vm11 bash[23232]: cephadm 2026-03-08T23:08:02.831294+0000 mgr.y (mgr.24419) 208 : cephadm [INF] Schedule rotate-key daemon osd.3 2026-03-08T23:08:03.846 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:03 vm11 bash[23232]: audit 2026-03-08T23:08:02.842688+0000 mon.a (mon.0) 857 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:08:03.846 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:03 vm11 bash[23232]: audit 2026-03-08T23:08:02.842688+0000 mon.a (mon.0) 857 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:08:03.846 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:03 vm11 bash[23232]: audit 2026-03-08T23:08:02.888196+0000 mon.a (mon.0) 858 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:08:03.846 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:03 vm11 bash[23232]: audit 2026-03-08T23:08:02.888196+0000 mon.a (mon.0) 858 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:08:03.846 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:03 vm11 bash[23232]: audit 2026-03-08T23:08:02.895395+0000 mon.c (mon.2) 112 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:08:03.846 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:03 vm11 bash[23232]: audit 2026-03-08T23:08:02.895395+0000 mon.c (mon.2) 112 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config dump", "format": 
"json"}]: dispatch 2026-03-08T23:08:03.846 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:03 vm11 bash[23232]: audit 2026-03-08T23:08:03.217989+0000 mon.c (mon.2) 113 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:08:03.846 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:03 vm11 bash[23232]: audit 2026-03-08T23:08:03.217989+0000 mon.c (mon.2) 113 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:08:03.846 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:03 vm11 bash[23232]: audit 2026-03-08T23:08:03.219173+0000 mon.c (mon.2) 114 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:08:03.846 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:03 vm11 bash[23232]: audit 2026-03-08T23:08:03.219173+0000 mon.c (mon.2) 114 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:08:03.846 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:03 vm11 bash[23232]: audit 2026-03-08T23:08:03.224991+0000 mon.a (mon.0) 859 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:08:03.846 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:03 vm11 bash[23232]: audit 2026-03-08T23:08:03.224991+0000 mon.a (mon.0) 859 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:08:03.846 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:03 vm11 bash[23232]: audit 2026-03-08T23:08:03.239298+0000 mon.c (mon.2) 115 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get-or-create-pending", "entity": "osd.3", "format": "json"}]: dispatch 2026-03-08T23:08:03.846 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:03 vm11 bash[23232]: audit 
2026-03-08T23:08:03.239298+0000 mon.c (mon.2) 115 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get-or-create-pending", "entity": "osd.3", "format": "json"}]: dispatch 2026-03-08T23:08:04.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:03 vm06 bash[20625]: audit 2026-03-08T23:08:02.262791+0000 mgr.y (mgr.24419) 206 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:08:04.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:03 vm06 bash[20625]: audit 2026-03-08T23:08:02.262791+0000 mgr.y (mgr.24419) 206 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:08:04.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:03 vm06 bash[20625]: audit 2026-03-08T23:08:02.830854+0000 mgr.y (mgr.24419) 207 : audit [DBG] from='client.24812 -' entity='client.admin' cmd=[{"prefix": "orch daemon", "action": "rotate-key", "name": "osd.3", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:08:04.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:03 vm06 bash[20625]: audit 2026-03-08T23:08:02.830854+0000 mgr.y (mgr.24419) 207 : audit [DBG] from='client.24812 -' entity='client.admin' cmd=[{"prefix": "orch daemon", "action": "rotate-key", "name": "osd.3", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:08:04.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:03 vm06 bash[20625]: cephadm 2026-03-08T23:08:02.831294+0000 mgr.y (mgr.24419) 208 : cephadm [INF] Schedule rotate-key daemon osd.3 2026-03-08T23:08:04.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:03 vm06 bash[20625]: cephadm 2026-03-08T23:08:02.831294+0000 mgr.y (mgr.24419) 208 : cephadm [INF] Schedule rotate-key daemon osd.3 2026-03-08T23:08:04.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:03 vm06 bash[20625]: audit 2026-03-08T23:08:02.842688+0000 mon.a 
(mon.0) 857 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:08:04.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:03 vm06 bash[20625]: audit 2026-03-08T23:08:02.842688+0000 mon.a (mon.0) 857 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:08:04.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:03 vm06 bash[20625]: audit 2026-03-08T23:08:02.888196+0000 mon.a (mon.0) 858 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:08:04.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:03 vm06 bash[20625]: audit 2026-03-08T23:08:02.888196+0000 mon.a (mon.0) 858 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:08:04.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:03 vm06 bash[20625]: audit 2026-03-08T23:08:02.895395+0000 mon.c (mon.2) 112 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:08:04.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:03 vm06 bash[20625]: audit 2026-03-08T23:08:02.895395+0000 mon.c (mon.2) 112 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:08:04.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:03 vm06 bash[20625]: audit 2026-03-08T23:08:03.217989+0000 mon.c (mon.2) 113 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:08:04.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:03 vm06 bash[20625]: audit 2026-03-08T23:08:03.217989+0000 mon.c (mon.2) 113 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:08:04.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:03 vm06 bash[20625]: audit 2026-03-08T23:08:03.219173+0000 mon.c (mon.2) 114 : audit [INF] from='mgr.24419 
192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:08:04.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:03 vm06 bash[20625]: audit 2026-03-08T23:08:03.219173+0000 mon.c (mon.2) 114 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:08:04.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:03 vm06 bash[20625]: audit 2026-03-08T23:08:03.224991+0000 mon.a (mon.0) 859 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:08:04.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:03 vm06 bash[20625]: audit 2026-03-08T23:08:03.224991+0000 mon.a (mon.0) 859 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:08:04.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:03 vm06 bash[20625]: audit 2026-03-08T23:08:03.239298+0000 mon.c (mon.2) 115 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get-or-create-pending", "entity": "osd.3", "format": "json"}]: dispatch 2026-03-08T23:08:04.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:03 vm06 bash[20625]: audit 2026-03-08T23:08:03.239298+0000 mon.c (mon.2) 115 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get-or-create-pending", "entity": "osd.3", "format": "json"}]: dispatch 2026-03-08T23:08:04.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:03 vm06 bash[20625]: audit 2026-03-08T23:08:03.239656+0000 mon.a (mon.0) 860 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create-pending", "entity": "osd.3", "format": "json"}]: dispatch 2026-03-08T23:08:04.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:03 vm06 bash[20625]: audit 2026-03-08T23:08:03.239656+0000 mon.a (mon.0) 860 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create-pending", "entity": 
"osd.3", "format": "json"}]: dispatch 2026-03-08T23:08:04.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:03 vm06 bash[20625]: audit 2026-03-08T23:08:03.242108+0000 mon.a (mon.0) 861 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd='[{"prefix": "auth get-or-create-pending", "entity": "osd.3", "format": "json"}]': finished 2026-03-08T23:08:04.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:03 vm06 bash[20625]: audit 2026-03-08T23:08:03.242108+0000 mon.a (mon.0) 861 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd='[{"prefix": "auth get-or-create-pending", "entity": "osd.3", "format": "json"}]': finished 2026-03-08T23:08:04.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:03 vm06 bash[20625]: audit 2026-03-08T23:08:03.627102+0000 mon.a (mon.0) 862 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:08:04.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:03 vm06 bash[20625]: audit 2026-03-08T23:08:03.627102+0000 mon.a (mon.0) 862 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:08:04.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:03 vm06 bash[20625]: audit 2026-03-08T23:08:03.635964+0000 mon.a (mon.0) 863 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:08:04.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:03 vm06 bash[20625]: audit 2026-03-08T23:08:03.635964+0000 mon.a (mon.0) 863 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:08:04.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:03 vm06 bash[20625]: audit 2026-03-08T23:08:03.806536+0000 mon.a (mon.0) 864 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:08:04.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:03 vm06 bash[20625]: audit 2026-03-08T23:08:03.806536+0000 mon.a (mon.0) 864 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:08:04.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:03 vm06 bash[20625]: audit 2026-03-08T23:08:03.814975+0000 mon.a (mon.0) 865 : audit [INF] 
from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:08:04.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:03 vm06 bash[20625]: audit 2026-03-08T23:08:03.814975+0000 mon.a (mon.0) 865 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:08:04.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:03 vm06 bash[27746]: audit 2026-03-08T23:08:02.262791+0000 mgr.y (mgr.24419) 206 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:08:04.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:03 vm06 bash[27746]: audit 2026-03-08T23:08:02.262791+0000 mgr.y (mgr.24419) 206 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:08:04.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:03 vm06 bash[27746]: audit 2026-03-08T23:08:02.830854+0000 mgr.y (mgr.24419) 207 : audit [DBG] from='client.24812 -' entity='client.admin' cmd=[{"prefix": "orch daemon", "action": "rotate-key", "name": "osd.3", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:08:04.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:03 vm06 bash[27746]: audit 2026-03-08T23:08:02.830854+0000 mgr.y (mgr.24419) 207 : audit [DBG] from='client.24812 -' entity='client.admin' cmd=[{"prefix": "orch daemon", "action": "rotate-key", "name": "osd.3", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:08:04.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:03 vm06 bash[27746]: cephadm 2026-03-08T23:08:02.831294+0000 mgr.y (mgr.24419) 208 : cephadm [INF] Schedule rotate-key daemon osd.3 2026-03-08T23:08:04.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:03 vm06 bash[27746]: cephadm 2026-03-08T23:08:02.831294+0000 mgr.y (mgr.24419) 208 : cephadm [INF] Schedule rotate-key daemon osd.3 2026-03-08T23:08:04.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:03 vm06 bash[27746]: audit 
2026-03-08T23:08:02.842688+0000 mon.a (mon.0) 857 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:08:04.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:03 vm06 bash[27746]: audit 2026-03-08T23:08:02.842688+0000 mon.a (mon.0) 857 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:08:04.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:03 vm06 bash[27746]: audit 2026-03-08T23:08:02.888196+0000 mon.a (mon.0) 858 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:08:04.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:03 vm06 bash[27746]: audit 2026-03-08T23:08:02.888196+0000 mon.a (mon.0) 858 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:08:04.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:03 vm06 bash[27746]: audit 2026-03-08T23:08:02.895395+0000 mon.c (mon.2) 112 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:08:04.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:03 vm06 bash[27746]: audit 2026-03-08T23:08:02.895395+0000 mon.c (mon.2) 112 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:08:04.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:03 vm06 bash[27746]: audit 2026-03-08T23:08:03.217989+0000 mon.c (mon.2) 113 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:08:04.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:03 vm06 bash[27746]: audit 2026-03-08T23:08:03.217989+0000 mon.c (mon.2) 113 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:08:04.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:03 vm06 bash[27746]: audit 2026-03-08T23:08:03.219173+0000 mon.c (mon.2) 114 : 
audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:08:04.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:03 vm06 bash[27746]: audit 2026-03-08T23:08:03.219173+0000 mon.c (mon.2) 114 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:08:04.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:03 vm06 bash[27746]: audit 2026-03-08T23:08:03.224991+0000 mon.a (mon.0) 859 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:08:04.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:03 vm06 bash[27746]: audit 2026-03-08T23:08:03.224991+0000 mon.a (mon.0) 859 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:08:04.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:03 vm06 bash[27746]: audit 2026-03-08T23:08:03.239298+0000 mon.c (mon.2) 115 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get-or-create-pending", "entity": "osd.3", "format": "json"}]: dispatch 2026-03-08T23:08:04.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:03 vm06 bash[27746]: audit 2026-03-08T23:08:03.239298+0000 mon.c (mon.2) 115 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get-or-create-pending", "entity": "osd.3", "format": "json"}]: dispatch 2026-03-08T23:08:04.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:03 vm06 bash[27746]: audit 2026-03-08T23:08:03.239656+0000 mon.a (mon.0) 860 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create-pending", "entity": "osd.3", "format": "json"}]: dispatch 2026-03-08T23:08:04.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:03 vm06 bash[27746]: audit 2026-03-08T23:08:03.239656+0000 mon.a (mon.0) 860 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd=[{"prefix": "auth 
get-or-create-pending", "entity": "osd.3", "format": "json"}]: dispatch 2026-03-08T23:08:04.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:03 vm06 bash[27746]: audit 2026-03-08T23:08:03.242108+0000 mon.a (mon.0) 861 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd='[{"prefix": "auth get-or-create-pending", "entity": "osd.3", "format": "json"}]': finished 2026-03-08T23:08:04.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:03 vm06 bash[27746]: audit 2026-03-08T23:08:03.242108+0000 mon.a (mon.0) 861 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd='[{"prefix": "auth get-or-create-pending", "entity": "osd.3", "format": "json"}]': finished 2026-03-08T23:08:04.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:03 vm06 bash[27746]: audit 2026-03-08T23:08:03.627102+0000 mon.a (mon.0) 862 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:08:04.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:03 vm06 bash[27746]: audit 2026-03-08T23:08:03.627102+0000 mon.a (mon.0) 862 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:08:04.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:03 vm06 bash[27746]: audit 2026-03-08T23:08:03.635964+0000 mon.a (mon.0) 863 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:08:04.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:03 vm06 bash[27746]: audit 2026-03-08T23:08:03.635964+0000 mon.a (mon.0) 863 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:08:04.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:03 vm06 bash[27746]: audit 2026-03-08T23:08:03.806536+0000 mon.a (mon.0) 864 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:08:04.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:03 vm06 bash[27746]: audit 2026-03-08T23:08:03.806536+0000 mon.a (mon.0) 864 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:08:04.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:03 vm06 bash[27746]: audit 2026-03-08T23:08:03.814975+0000 
mon.a (mon.0) 865 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:08:04.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:03 vm06 bash[27746]: audit 2026-03-08T23:08:03.814975+0000 mon.a (mon.0) 865 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:08:04.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:03 vm11 bash[23232]: audit 2026-03-08T23:08:03.239656+0000 mon.a (mon.0) 860 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create-pending", "entity": "osd.3", "format": "json"}]: dispatch 2026-03-08T23:08:04.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:03 vm11 bash[23232]: audit 2026-03-08T23:08:03.239656+0000 mon.a (mon.0) 860 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create-pending", "entity": "osd.3", "format": "json"}]: dispatch 2026-03-08T23:08:04.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:03 vm11 bash[23232]: audit 2026-03-08T23:08:03.242108+0000 mon.a (mon.0) 861 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd='[{"prefix": "auth get-or-create-pending", "entity": "osd.3", "format": "json"}]': finished 2026-03-08T23:08:04.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:03 vm11 bash[23232]: audit 2026-03-08T23:08:03.242108+0000 mon.a (mon.0) 861 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd='[{"prefix": "auth get-or-create-pending", "entity": "osd.3", "format": "json"}]': finished 2026-03-08T23:08:04.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:03 vm11 bash[23232]: audit 2026-03-08T23:08:03.627102+0000 mon.a (mon.0) 862 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:08:04.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:03 vm11 bash[23232]: audit 2026-03-08T23:08:03.627102+0000 mon.a (mon.0) 862 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:08:04.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:03 vm11 bash[23232]: audit 2026-03-08T23:08:03.635964+0000 mon.a (mon.0) 863 : 
audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:08:04.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:03 vm11 bash[23232]: audit 2026-03-08T23:08:03.635964+0000 mon.a (mon.0) 863 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:08:04.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:03 vm11 bash[23232]: audit 2026-03-08T23:08:03.806536+0000 mon.a (mon.0) 864 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:08:04.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:03 vm11 bash[23232]: audit 2026-03-08T23:08:03.806536+0000 mon.a (mon.0) 864 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:08:04.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:03 vm11 bash[23232]: audit 2026-03-08T23:08:03.814975+0000 mon.a (mon.0) 865 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:08:04.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:03 vm11 bash[23232]: audit 2026-03-08T23:08:03.814975+0000 mon.a (mon.0) 865 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:08:05.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:04 vm06 bash[20625]: cephadm 2026-03-08T23:08:03.238966+0000 mgr.y (mgr.24419) 209 : cephadm [INF] Rotating authentication key for osd.3 2026-03-08T23:08:05.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:04 vm06 bash[20625]: cephadm 2026-03-08T23:08:03.238966+0000 mgr.y (mgr.24419) 209 : cephadm [INF] Rotating authentication key for osd.3 2026-03-08T23:08:05.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:04 vm06 bash[20625]: cephadm 2026-03-08T23:08:03.246120+0000 mgr.y (mgr.24419) 210 : cephadm [INF] Reconfiguring daemon osd.3 on vm06 2026-03-08T23:08:05.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:04 vm06 bash[20625]: cephadm 2026-03-08T23:08:03.246120+0000 mgr.y (mgr.24419) 210 : cephadm [INF] Reconfiguring daemon osd.3 on vm06 2026-03-08T23:08:05.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:04 vm06 bash[20625]: cluster 
2026-03-08T23:08:03.673012+0000 mgr.y (mgr.24419) 211 : cluster [DBG] pgmap v137: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:08:05.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:04 vm06 bash[20625]: cluster 2026-03-08T23:08:03.673012+0000 mgr.y (mgr.24419) 211 : cluster [DBG] pgmap v137: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:08:05.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:04 vm06 bash[27746]: cephadm 2026-03-08T23:08:03.238966+0000 mgr.y (mgr.24419) 209 : cephadm [INF] Rotating authentication key for osd.3 2026-03-08T23:08:05.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:04 vm06 bash[27746]: cephadm 2026-03-08T23:08:03.238966+0000 mgr.y (mgr.24419) 209 : cephadm [INF] Rotating authentication key for osd.3 2026-03-08T23:08:05.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:04 vm06 bash[27746]: cephadm 2026-03-08T23:08:03.246120+0000 mgr.y (mgr.24419) 210 : cephadm [INF] Reconfiguring daemon osd.3 on vm06 2026-03-08T23:08:05.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:04 vm06 bash[27746]: cephadm 2026-03-08T23:08:03.246120+0000 mgr.y (mgr.24419) 210 : cephadm [INF] Reconfiguring daemon osd.3 on vm06 2026-03-08T23:08:05.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:04 vm06 bash[27746]: cluster 2026-03-08T23:08:03.673012+0000 mgr.y (mgr.24419) 211 : cluster [DBG] pgmap v137: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:08:05.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:04 vm06 bash[27746]: cluster 2026-03-08T23:08:03.673012+0000 mgr.y (mgr.24419) 211 : cluster [DBG] pgmap v137: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:08:05.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:04 vm11 bash[23232]: cephadm 
2026-03-08T23:08:03.238966+0000 mgr.y (mgr.24419) 209 : cephadm [INF] Rotating authentication key for osd.3 2026-03-08T23:08:05.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:04 vm11 bash[23232]: cephadm 2026-03-08T23:08:03.238966+0000 mgr.y (mgr.24419) 209 : cephadm [INF] Rotating authentication key for osd.3 2026-03-08T23:08:05.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:04 vm11 bash[23232]: cephadm 2026-03-08T23:08:03.246120+0000 mgr.y (mgr.24419) 210 : cephadm [INF] Reconfiguring daemon osd.3 on vm06 2026-03-08T23:08:05.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:04 vm11 bash[23232]: cephadm 2026-03-08T23:08:03.246120+0000 mgr.y (mgr.24419) 210 : cephadm [INF] Reconfiguring daemon osd.3 on vm06 2026-03-08T23:08:05.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:04 vm11 bash[23232]: cluster 2026-03-08T23:08:03.673012+0000 mgr.y (mgr.24419) 211 : cluster [DBG] pgmap v137: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:08:05.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:04 vm11 bash[23232]: cluster 2026-03-08T23:08:03.673012+0000 mgr.y (mgr.24419) 211 : cluster [DBG] pgmap v137: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:08:07.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:06 vm06 bash[20625]: cluster 2026-03-08T23:08:05.673490+0000 mgr.y (mgr.24419) 212 : cluster [DBG] pgmap v138: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:08:07.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:06 vm06 bash[20625]: cluster 2026-03-08T23:08:05.673490+0000 mgr.y (mgr.24419) 212 : cluster [DBG] pgmap v138: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:08:07.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:06 vm06 bash[27746]: cluster 
2026-03-08T23:08:05.673490+0000 mgr.y (mgr.24419) 212 : cluster [DBG] pgmap v138: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:08:07.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:06 vm06 bash[27746]: cluster 2026-03-08T23:08:05.673490+0000 mgr.y (mgr.24419) 212 : cluster [DBG] pgmap v138: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:08:07.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:06 vm11 bash[23232]: cluster 2026-03-08T23:08:05.673490+0000 mgr.y (mgr.24419) 212 : cluster [DBG] pgmap v138: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:08:07.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:06 vm11 bash[23232]: cluster 2026-03-08T23:08:05.673490+0000 mgr.y (mgr.24419) 212 : cluster [DBG] pgmap v138: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:08:07.911 INFO:teuthology.orchestra.run.vm06.stderr:++ ceph auth get-key osd.3 2026-03-08T23:08:08.107 INFO:teuthology.orchestra.run.vm06.stderr:+ NK=AQBz/61pjxFbMxAAcKCrG39eVjspUy4thktQgA== 2026-03-08T23:08:08.107 INFO:teuthology.orchestra.run.vm06.stderr:+ '[' AQBz/61pjxFbMxAAcKCrG39eVjspUy4thktQgA== == AQBz/61pjxFbMxAAcKCrG39eVjspUy4thktQgA== ']' 2026-03-08T23:08:08.107 INFO:teuthology.orchestra.run.vm06.stderr:+ sleep 5 2026-03-08T23:08:08.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:07 vm06 bash[20625]: audit 2026-03-08T23:08:07.848084+0000 mon.c (mon.2) 116 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:08:08.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:07 vm06 bash[20625]: audit 2026-03-08T23:08:07.848084+0000 mon.c (mon.2) 116 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' 
entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:08:08.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:07 vm06 bash[27746]: audit 2026-03-08T23:08:07.848084+0000 mon.c (mon.2) 116 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:08:08.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:07 vm06 bash[27746]: audit 2026-03-08T23:08:07.848084+0000 mon.c (mon.2) 116 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:08:08.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:07 vm11 bash[23232]: audit 2026-03-08T23:08:07.848084+0000 mon.c (mon.2) 116 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:08:08.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:07 vm11 bash[23232]: audit 2026-03-08T23:08:07.848084+0000 mon.c (mon.2) 116 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:08:09.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:08 vm06 bash[20625]: cluster 2026-03-08T23:08:07.673758+0000 mgr.y (mgr.24419) 213 : cluster [DBG] pgmap v139: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:08:09.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:08 vm06 bash[20625]: cluster 2026-03-08T23:08:07.673758+0000 mgr.y (mgr.24419) 213 : cluster [DBG] pgmap v139: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:08:09.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:08 vm06 bash[20625]: audit 2026-03-08T23:08:08.098231+0000 mon.c (mon.2) 117 : audit [INF] 
from='client.? 192.168.123.106:0/1363631943' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.3"}]: dispatch 2026-03-08T23:08:09.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:08 vm06 bash[20625]: audit 2026-03-08T23:08:08.098231+0000 mon.c (mon.2) 117 : audit [INF] from='client.? 192.168.123.106:0/1363631943' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.3"}]: dispatch 2026-03-08T23:08:09.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:08 vm06 bash[27746]: cluster 2026-03-08T23:08:07.673758+0000 mgr.y (mgr.24419) 213 : cluster [DBG] pgmap v139: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:08:09.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:08 vm06 bash[27746]: cluster 2026-03-08T23:08:07.673758+0000 mgr.y (mgr.24419) 213 : cluster [DBG] pgmap v139: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:08:09.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:08 vm06 bash[27746]: audit 2026-03-08T23:08:08.098231+0000 mon.c (mon.2) 117 : audit [INF] from='client.? 192.168.123.106:0/1363631943' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.3"}]: dispatch 2026-03-08T23:08:09.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:08 vm06 bash[27746]: audit 2026-03-08T23:08:08.098231+0000 mon.c (mon.2) 117 : audit [INF] from='client.? 
192.168.123.106:0/1363631943' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.3"}]: dispatch 2026-03-08T23:08:09.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:08 vm11 bash[23232]: cluster 2026-03-08T23:08:07.673758+0000 mgr.y (mgr.24419) 213 : cluster [DBG] pgmap v139: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:08:09.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:08 vm11 bash[23232]: cluster 2026-03-08T23:08:07.673758+0000 mgr.y (mgr.24419) 213 : cluster [DBG] pgmap v139: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:08:09.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:08 vm11 bash[23232]: audit 2026-03-08T23:08:08.098231+0000 mon.c (mon.2) 117 : audit [INF] from='client.? 192.168.123.106:0/1363631943' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.3"}]: dispatch 2026-03-08T23:08:09.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:08 vm11 bash[23232]: audit 2026-03-08T23:08:08.098231+0000 mon.c (mon.2) 117 : audit [INF] from='client.? 
192.168.123.106:0/1363631943' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.3"}]: dispatch 2026-03-08T23:08:11.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:10 vm06 bash[20625]: cluster 2026-03-08T23:08:09.674022+0000 mgr.y (mgr.24419) 214 : cluster [DBG] pgmap v140: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:08:11.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:10 vm06 bash[20625]: cluster 2026-03-08T23:08:09.674022+0000 mgr.y (mgr.24419) 214 : cluster [DBG] pgmap v140: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:08:11.029 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:08:10 vm06 bash[20883]: ::ffff:192.168.123.111 - - [08/Mar/2026:23:08:10] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-08T23:08:11.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:10 vm06 bash[27746]: cluster 2026-03-08T23:08:09.674022+0000 mgr.y (mgr.24419) 214 : cluster [DBG] pgmap v140: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:08:11.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:10 vm06 bash[27746]: cluster 2026-03-08T23:08:09.674022+0000 mgr.y (mgr.24419) 214 : cluster [DBG] pgmap v140: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:08:11.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:10 vm11 bash[23232]: cluster 2026-03-08T23:08:09.674022+0000 mgr.y (mgr.24419) 214 : cluster [DBG] pgmap v140: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:08:11.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:10 vm11 bash[23232]: cluster 2026-03-08T23:08:09.674022+0000 mgr.y (mgr.24419) 214 : cluster [DBG] pgmap v140: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 
160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:08:12.558 INFO:journalctl@ceph.iscsi.iscsi.a.vm11.stdout:Mar 08 23:08:12 vm11 bash[48986]: debug there is no tcmu-runner data available 2026-03-08T23:08:13.108 INFO:teuthology.orchestra.run.vm06.stderr:++ ceph auth get-key osd.3 2026-03-08T23:08:13.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:12 vm06 bash[20625]: cluster 2026-03-08T23:08:11.674401+0000 mgr.y (mgr.24419) 215 : cluster [DBG] pgmap v141: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:08:13.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:12 vm06 bash[20625]: cluster 2026-03-08T23:08:11.674401+0000 mgr.y (mgr.24419) 215 : cluster [DBG] pgmap v141: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:08:13.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:12 vm06 bash[27746]: cluster 2026-03-08T23:08:11.674401+0000 mgr.y (mgr.24419) 215 : cluster [DBG] pgmap v141: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:08:13.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:12 vm06 bash[27746]: cluster 2026-03-08T23:08:11.674401+0000 mgr.y (mgr.24419) 215 : cluster [DBG] pgmap v141: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:08:13.291 INFO:teuthology.orchestra.run.vm06.stderr:+ NK=AQBz/61pjxFbMxAAcKCrG39eVjspUy4thktQgA== 2026-03-08T23:08:13.291 INFO:teuthology.orchestra.run.vm06.stderr:+ '[' AQBz/61pjxFbMxAAcKCrG39eVjspUy4thktQgA== == AQBz/61pjxFbMxAAcKCrG39eVjspUy4thktQgA== ']' 2026-03-08T23:08:13.291 INFO:teuthology.orchestra.run.vm06.stderr:+ sleep 5 2026-03-08T23:08:13.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:12 vm11 bash[23232]: cluster 2026-03-08T23:08:11.674401+0000 mgr.y (mgr.24419) 215 : cluster [DBG] pgmap v141: 132 pgs: 132 active+clean; 455 KiB 
data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:08:13.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:12 vm11 bash[23232]: cluster 2026-03-08T23:08:11.674401+0000 mgr.y (mgr.24419) 215 : cluster [DBG] pgmap v141: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:08:14.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:13 vm06 bash[20625]: audit 2026-03-08T23:08:12.273409+0000 mgr.y (mgr.24419) 216 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:08:14.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:13 vm06 bash[20625]: audit 2026-03-08T23:08:12.273409+0000 mgr.y (mgr.24419) 216 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:08:14.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:13 vm06 bash[20625]: audit 2026-03-08T23:08:13.283275+0000 mon.c (mon.2) 118 : audit [INF] from='client.? 192.168.123.106:0/234554511' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.3"}]: dispatch 2026-03-08T23:08:14.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:13 vm06 bash[20625]: audit 2026-03-08T23:08:13.283275+0000 mon.c (mon.2) 118 : audit [INF] from='client.? 
192.168.123.106:0/234554511' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.3"}]: dispatch 2026-03-08T23:08:14.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:13 vm06 bash[27746]: audit 2026-03-08T23:08:12.273409+0000 mgr.y (mgr.24419) 216 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:08:14.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:13 vm06 bash[27746]: audit 2026-03-08T23:08:12.273409+0000 mgr.y (mgr.24419) 216 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:08:14.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:13 vm06 bash[27746]: audit 2026-03-08T23:08:13.283275+0000 mon.c (mon.2) 118 : audit [INF] from='client.? 192.168.123.106:0/234554511' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.3"}]: dispatch 2026-03-08T23:08:14.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:13 vm06 bash[27746]: audit 2026-03-08T23:08:13.283275+0000 mon.c (mon.2) 118 : audit [INF] from='client.? 
192.168.123.106:0/234554511' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.3"}]: dispatch 2026-03-08T23:08:14.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:13 vm11 bash[23232]: audit 2026-03-08T23:08:12.273409+0000 mgr.y (mgr.24419) 216 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:08:14.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:13 vm11 bash[23232]: audit 2026-03-08T23:08:12.273409+0000 mgr.y (mgr.24419) 216 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:08:14.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:13 vm11 bash[23232]: audit 2026-03-08T23:08:13.283275+0000 mon.c (mon.2) 118 : audit [INF] from='client.? 192.168.123.106:0/234554511' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.3"}]: dispatch 2026-03-08T23:08:14.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:13 vm11 bash[23232]: audit 2026-03-08T23:08:13.283275+0000 mon.c (mon.2) 118 : audit [INF] from='client.? 
192.168.123.106:0/234554511' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.3"}]: dispatch 2026-03-08T23:08:15.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:14 vm06 bash[20625]: cluster 2026-03-08T23:08:13.674676+0000 mgr.y (mgr.24419) 217 : cluster [DBG] pgmap v142: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:08:15.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:14 vm06 bash[20625]: cluster 2026-03-08T23:08:13.674676+0000 mgr.y (mgr.24419) 217 : cluster [DBG] pgmap v142: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:08:15.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:14 vm06 bash[27746]: cluster 2026-03-08T23:08:13.674676+0000 mgr.y (mgr.24419) 217 : cluster [DBG] pgmap v142: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:08:15.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:14 vm06 bash[27746]: cluster 2026-03-08T23:08:13.674676+0000 mgr.y (mgr.24419) 217 : cluster [DBG] pgmap v142: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:08:15.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:14 vm11 bash[23232]: cluster 2026-03-08T23:08:13.674676+0000 mgr.y (mgr.24419) 217 : cluster [DBG] pgmap v142: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:08:15.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:14 vm11 bash[23232]: cluster 2026-03-08T23:08:13.674676+0000 mgr.y (mgr.24419) 217 : cluster [DBG] pgmap v142: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:08:17.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:16 vm06 bash[20625]: cluster 2026-03-08T23:08:15.675112+0000 mgr.y (mgr.24419) 218 : cluster 
[DBG] pgmap v143: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:08:17.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:16 vm06 bash[20625]: cluster 2026-03-08T23:08:15.675112+0000 mgr.y (mgr.24419) 218 : cluster [DBG] pgmap v143: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:08:17.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:16 vm06 bash[27746]: cluster 2026-03-08T23:08:15.675112+0000 mgr.y (mgr.24419) 218 : cluster [DBG] pgmap v143: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:08:17.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:16 vm06 bash[27746]: cluster 2026-03-08T23:08:15.675112+0000 mgr.y (mgr.24419) 218 : cluster [DBG] pgmap v143: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:08:17.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:16 vm11 bash[23232]: cluster 2026-03-08T23:08:15.675112+0000 mgr.y (mgr.24419) 218 : cluster [DBG] pgmap v143: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:08:17.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:16 vm11 bash[23232]: cluster 2026-03-08T23:08:15.675112+0000 mgr.y (mgr.24419) 218 : cluster [DBG] pgmap v143: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:08:18.293 INFO:teuthology.orchestra.run.vm06.stderr:++ ceph auth get-key osd.3 2026-03-08T23:08:18.468 INFO:teuthology.orchestra.run.vm06.stderr:+ NK=AQBz/61pjxFbMxAAcKCrG39eVjspUy4thktQgA== 2026-03-08T23:08:18.468 INFO:teuthology.orchestra.run.vm06.stderr:+ '[' AQBz/61pjxFbMxAAcKCrG39eVjspUy4thktQgA== == AQBz/61pjxFbMxAAcKCrG39eVjspUy4thktQgA== ']' 2026-03-08T23:08:18.468 INFO:teuthology.orchestra.run.vm06.stderr:+ sleep 
5 2026-03-08T23:08:19.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:18 vm06 bash[20625]: cluster 2026-03-08T23:08:17.675397+0000 mgr.y (mgr.24419) 219 : cluster [DBG] pgmap v144: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:08:19.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:18 vm06 bash[20625]: cluster 2026-03-08T23:08:17.675397+0000 mgr.y (mgr.24419) 219 : cluster [DBG] pgmap v144: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:08:19.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:18 vm06 bash[20625]: audit 2026-03-08T23:08:18.460147+0000 mon.c (mon.2) 119 : audit [INF] from='client.? 192.168.123.106:0/1645013291' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.3"}]: dispatch 2026-03-08T23:08:19.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:18 vm06 bash[20625]: audit 2026-03-08T23:08:18.460147+0000 mon.c (mon.2) 119 : audit [INF] from='client.? 192.168.123.106:0/1645013291' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.3"}]: dispatch 2026-03-08T23:08:19.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:18 vm06 bash[27746]: cluster 2026-03-08T23:08:17.675397+0000 mgr.y (mgr.24419) 219 : cluster [DBG] pgmap v144: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:08:19.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:18 vm06 bash[27746]: cluster 2026-03-08T23:08:17.675397+0000 mgr.y (mgr.24419) 219 : cluster [DBG] pgmap v144: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:08:19.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:18 vm06 bash[27746]: audit 2026-03-08T23:08:18.460147+0000 mon.c (mon.2) 119 : audit [INF] from='client.? 
192.168.123.106:0/1645013291' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.3"}]: dispatch 2026-03-08T23:08:19.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:18 vm06 bash[27746]: audit 2026-03-08T23:08:18.460147+0000 mon.c (mon.2) 119 : audit [INF] from='client.? 192.168.123.106:0/1645013291' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.3"}]: dispatch 2026-03-08T23:08:19.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:18 vm11 bash[23232]: cluster 2026-03-08T23:08:17.675397+0000 mgr.y (mgr.24419) 219 : cluster [DBG] pgmap v144: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:08:19.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:18 vm11 bash[23232]: cluster 2026-03-08T23:08:17.675397+0000 mgr.y (mgr.24419) 219 : cluster [DBG] pgmap v144: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:08:19.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:18 vm11 bash[23232]: audit 2026-03-08T23:08:18.460147+0000 mon.c (mon.2) 119 : audit [INF] from='client.? 192.168.123.106:0/1645013291' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.3"}]: dispatch 2026-03-08T23:08:19.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:18 vm11 bash[23232]: audit 2026-03-08T23:08:18.460147+0000 mon.c (mon.2) 119 : audit [INF] from='client.? 
192.168.123.106:0/1645013291' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.3"}]: dispatch 2026-03-08T23:08:21.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:20 vm06 bash[20625]: cluster 2026-03-08T23:08:19.675670+0000 mgr.y (mgr.24419) 220 : cluster [DBG] pgmap v145: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:08:21.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:20 vm06 bash[20625]: cluster 2026-03-08T23:08:19.675670+0000 mgr.y (mgr.24419) 220 : cluster [DBG] pgmap v145: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:08:21.029 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:08:20 vm06 bash[20883]: ::ffff:192.168.123.111 - - [08/Mar/2026:23:08:20] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-08T23:08:21.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:20 vm06 bash[27746]: cluster 2026-03-08T23:08:19.675670+0000 mgr.y (mgr.24419) 220 : cluster [DBG] pgmap v145: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:08:21.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:20 vm06 bash[27746]: cluster 2026-03-08T23:08:19.675670+0000 mgr.y (mgr.24419) 220 : cluster [DBG] pgmap v145: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:08:21.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:20 vm11 bash[23232]: cluster 2026-03-08T23:08:19.675670+0000 mgr.y (mgr.24419) 220 : cluster [DBG] pgmap v145: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:08:21.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:20 vm11 bash[23232]: cluster 2026-03-08T23:08:19.675670+0000 mgr.y (mgr.24419) 220 : cluster [DBG] pgmap v145: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 
160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:08:22.558 INFO:journalctl@ceph.iscsi.iscsi.a.vm11.stdout:Mar 08 23:08:22 vm11 bash[48986]: debug there is no tcmu-runner data available 2026-03-08T23:08:23.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:22 vm06 bash[20625]: cluster 2026-03-08T23:08:21.676075+0000 mgr.y (mgr.24419) 221 : cluster [DBG] pgmap v146: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:08:23.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:22 vm06 bash[20625]: cluster 2026-03-08T23:08:21.676075+0000 mgr.y (mgr.24419) 221 : cluster [DBG] pgmap v146: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:08:23.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:22 vm06 bash[20625]: audit 2026-03-08T23:08:22.855835+0000 mon.c (mon.2) 120 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:08:23.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:22 vm06 bash[20625]: audit 2026-03-08T23:08:22.855835+0000 mon.c (mon.2) 120 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:08:23.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:22 vm06 bash[27746]: cluster 2026-03-08T23:08:21.676075+0000 mgr.y (mgr.24419) 221 : cluster [DBG] pgmap v146: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:08:23.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:22 vm06 bash[27746]: cluster 2026-03-08T23:08:21.676075+0000 mgr.y (mgr.24419) 221 : cluster [DBG] pgmap v146: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:08:23.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:22 
vm06 bash[27746]: audit 2026-03-08T23:08:22.855835+0000 mon.c (mon.2) 120 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:08:23.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:22 vm06 bash[27746]: audit 2026-03-08T23:08:22.855835+0000 mon.c (mon.2) 120 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:08:23.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:22 vm11 bash[23232]: cluster 2026-03-08T23:08:21.676075+0000 mgr.y (mgr.24419) 221 : cluster [DBG] pgmap v146: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:08:23.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:22 vm11 bash[23232]: cluster 2026-03-08T23:08:21.676075+0000 mgr.y (mgr.24419) 221 : cluster [DBG] pgmap v146: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:08:23.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:22 vm11 bash[23232]: audit 2026-03-08T23:08:22.855835+0000 mon.c (mon.2) 120 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:08:23.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:22 vm11 bash[23232]: audit 2026-03-08T23:08:22.855835+0000 mon.c (mon.2) 120 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:08:23.469 INFO:teuthology.orchestra.run.vm06.stderr:++ ceph auth get-key osd.3 2026-03-08T23:08:23.651 INFO:teuthology.orchestra.run.vm06.stderr:+ NK=AQBz/61pjxFbMxAAcKCrG39eVjspUy4thktQgA== 2026-03-08T23:08:23.651 INFO:teuthology.orchestra.run.vm06.stderr:+ '[' AQBz/61pjxFbMxAAcKCrG39eVjspUy4thktQgA== == 
AQBz/61pjxFbMxAAcKCrG39eVjspUy4thktQgA== ']' 2026-03-08T23:08:23.651 INFO:teuthology.orchestra.run.vm06.stderr:+ sleep 5 2026-03-08T23:08:24.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:23 vm06 bash[20625]: audit 2026-03-08T23:08:22.284064+0000 mgr.y (mgr.24419) 222 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:08:24.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:23 vm06 bash[20625]: audit 2026-03-08T23:08:22.284064+0000 mgr.y (mgr.24419) 222 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:08:24.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:23 vm06 bash[20625]: audit 2026-03-08T23:08:23.643075+0000 mon.c (mon.2) 121 : audit [INF] from='client.? 192.168.123.106:0/1339397414' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.3"}]: dispatch 2026-03-08T23:08:24.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:23 vm06 bash[20625]: audit 2026-03-08T23:08:23.643075+0000 mon.c (mon.2) 121 : audit [INF] from='client.? 
192.168.123.106:0/1339397414' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.3"}]: dispatch 2026-03-08T23:08:24.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:23 vm06 bash[27746]: audit 2026-03-08T23:08:22.284064+0000 mgr.y (mgr.24419) 222 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:08:24.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:23 vm06 bash[27746]: audit 2026-03-08T23:08:22.284064+0000 mgr.y (mgr.24419) 222 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:08:24.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:23 vm06 bash[27746]: audit 2026-03-08T23:08:23.643075+0000 mon.c (mon.2) 121 : audit [INF] from='client.? 192.168.123.106:0/1339397414' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.3"}]: dispatch 2026-03-08T23:08:24.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:23 vm06 bash[27746]: audit 2026-03-08T23:08:23.643075+0000 mon.c (mon.2) 121 : audit [INF] from='client.? 
192.168.123.106:0/1339397414' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.3"}]: dispatch 2026-03-08T23:08:24.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:23 vm11 bash[23232]: audit 2026-03-08T23:08:22.284064+0000 mgr.y (mgr.24419) 222 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:08:24.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:23 vm11 bash[23232]: audit 2026-03-08T23:08:22.284064+0000 mgr.y (mgr.24419) 222 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:08:24.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:23 vm11 bash[23232]: audit 2026-03-08T23:08:23.643075+0000 mon.c (mon.2) 121 : audit [INF] from='client.? 192.168.123.106:0/1339397414' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.3"}]: dispatch 2026-03-08T23:08:24.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:23 vm11 bash[23232]: audit 2026-03-08T23:08:23.643075+0000 mon.c (mon.2) 121 : audit [INF] from='client.? 
192.168.123.106:0/1339397414' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.3"}]: dispatch 2026-03-08T23:08:25.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:24 vm06 bash[20625]: cluster 2026-03-08T23:08:23.676319+0000 mgr.y (mgr.24419) 223 : cluster [DBG] pgmap v147: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:08:25.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:24 vm06 bash[20625]: cluster 2026-03-08T23:08:23.676319+0000 mgr.y (mgr.24419) 223 : cluster [DBG] pgmap v147: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:08:25.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:24 vm06 bash[27746]: cluster 2026-03-08T23:08:23.676319+0000 mgr.y (mgr.24419) 223 : cluster [DBG] pgmap v147: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:08:25.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:24 vm06 bash[27746]: cluster 2026-03-08T23:08:23.676319+0000 mgr.y (mgr.24419) 223 : cluster [DBG] pgmap v147: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:08:25.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:24 vm11 bash[23232]: cluster 2026-03-08T23:08:23.676319+0000 mgr.y (mgr.24419) 223 : cluster [DBG] pgmap v147: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:08:25.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:24 vm11 bash[23232]: cluster 2026-03-08T23:08:23.676319+0000 mgr.y (mgr.24419) 223 : cluster [DBG] pgmap v147: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:08:27.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:26 vm06 bash[20625]: cluster 2026-03-08T23:08:25.676740+0000 mgr.y (mgr.24419) 224 : cluster 
[DBG] pgmap v148: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:08:27.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:26 vm06 bash[20625]: cluster 2026-03-08T23:08:25.676740+0000 mgr.y (mgr.24419) 224 : cluster [DBG] pgmap v148: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:08:27.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:26 vm06 bash[27746]: cluster 2026-03-08T23:08:25.676740+0000 mgr.y (mgr.24419) 224 : cluster [DBG] pgmap v148: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:08:27.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:26 vm06 bash[27746]: cluster 2026-03-08T23:08:25.676740+0000 mgr.y (mgr.24419) 224 : cluster [DBG] pgmap v148: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:08:27.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:26 vm11 bash[23232]: cluster 2026-03-08T23:08:25.676740+0000 mgr.y (mgr.24419) 224 : cluster [DBG] pgmap v148: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:08:27.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:26 vm11 bash[23232]: cluster 2026-03-08T23:08:25.676740+0000 mgr.y (mgr.24419) 224 : cluster [DBG] pgmap v148: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:08:28.652 INFO:teuthology.orchestra.run.vm06.stderr:++ ceph auth get-key osd.3 2026-03-08T23:08:28.843 INFO:teuthology.orchestra.run.vm06.stderr:+ NK=AQBz/61pjxFbMxAAcKCrG39eVjspUy4thktQgA== 2026-03-08T23:08:28.843 INFO:teuthology.orchestra.run.vm06.stderr:+ '[' AQBz/61pjxFbMxAAcKCrG39eVjspUy4thktQgA== == AQBz/61pjxFbMxAAcKCrG39eVjspUy4thktQgA== ']' 2026-03-08T23:08:28.843 INFO:teuthology.orchestra.run.vm06.stderr:+ sleep 
5 2026-03-08T23:08:29.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:28 vm06 bash[20625]: cluster 2026-03-08T23:08:27.676984+0000 mgr.y (mgr.24419) 225 : cluster [DBG] pgmap v149: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:08:29.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:28 vm06 bash[20625]: cluster 2026-03-08T23:08:27.676984+0000 mgr.y (mgr.24419) 225 : cluster [DBG] pgmap v149: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:08:29.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:28 vm06 bash[20625]: audit 2026-03-08T23:08:28.835316+0000 mon.a (mon.0) 866 : audit [INF] from='client.? 192.168.123.106:0/3261025312' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.3"}]: dispatch 2026-03-08T23:08:29.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:28 vm06 bash[20625]: audit 2026-03-08T23:08:28.835316+0000 mon.a (mon.0) 866 : audit [INF] from='client.? 192.168.123.106:0/3261025312' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.3"}]: dispatch 2026-03-08T23:08:29.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:28 vm06 bash[27746]: cluster 2026-03-08T23:08:27.676984+0000 mgr.y (mgr.24419) 225 : cluster [DBG] pgmap v149: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:08:29.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:28 vm06 bash[27746]: cluster 2026-03-08T23:08:27.676984+0000 mgr.y (mgr.24419) 225 : cluster [DBG] pgmap v149: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:08:29.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:28 vm06 bash[27746]: audit 2026-03-08T23:08:28.835316+0000 mon.a (mon.0) 866 : audit [INF] from='client.? 
192.168.123.106:0/3261025312' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.3"}]: dispatch 2026-03-08T23:08:29.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:28 vm06 bash[27746]: audit 2026-03-08T23:08:28.835316+0000 mon.a (mon.0) 866 : audit [INF] from='client.? 192.168.123.106:0/3261025312' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.3"}]: dispatch 2026-03-08T23:08:29.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:28 vm11 bash[23232]: cluster 2026-03-08T23:08:27.676984+0000 mgr.y (mgr.24419) 225 : cluster [DBG] pgmap v149: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:08:29.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:28 vm11 bash[23232]: cluster 2026-03-08T23:08:27.676984+0000 mgr.y (mgr.24419) 225 : cluster [DBG] pgmap v149: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:08:29.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:28 vm11 bash[23232]: audit 2026-03-08T23:08:28.835316+0000 mon.a (mon.0) 866 : audit [INF] from='client.? 192.168.123.106:0/3261025312' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.3"}]: dispatch 2026-03-08T23:08:29.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:28 vm11 bash[23232]: audit 2026-03-08T23:08:28.835316+0000 mon.a (mon.0) 866 : audit [INF] from='client.? 
192.168.123.106:0/3261025312' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.3"}]: dispatch 2026-03-08T23:08:31.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:30 vm06 bash[20625]: cluster 2026-03-08T23:08:29.677243+0000 mgr.y (mgr.24419) 226 : cluster [DBG] pgmap v150: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:08:31.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:30 vm06 bash[20625]: cluster 2026-03-08T23:08:29.677243+0000 mgr.y (mgr.24419) 226 : cluster [DBG] pgmap v150: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:08:31.029 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:08:30 vm06 bash[20883]: ::ffff:192.168.123.111 - - [08/Mar/2026:23:08:30] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-08T23:08:31.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:30 vm06 bash[27746]: cluster 2026-03-08T23:08:29.677243+0000 mgr.y (mgr.24419) 226 : cluster [DBG] pgmap v150: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:08:31.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:30 vm06 bash[27746]: cluster 2026-03-08T23:08:29.677243+0000 mgr.y (mgr.24419) 226 : cluster [DBG] pgmap v150: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:08:31.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:30 vm11 bash[23232]: cluster 2026-03-08T23:08:29.677243+0000 mgr.y (mgr.24419) 226 : cluster [DBG] pgmap v150: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:08:31.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:30 vm11 bash[23232]: cluster 2026-03-08T23:08:29.677243+0000 mgr.y (mgr.24419) 226 : cluster [DBG] pgmap v150: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 
160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:08:32.558 INFO:journalctl@ceph.iscsi.iscsi.a.vm11.stdout:Mar 08 23:08:32 vm11 bash[48986]: debug there is no tcmu-runner data available 2026-03-08T23:08:33.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:33 vm06 bash[20625]: cluster 2026-03-08T23:08:31.677700+0000 mgr.y (mgr.24419) 227 : cluster [DBG] pgmap v151: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:08:33.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:33 vm06 bash[20625]: cluster 2026-03-08T23:08:31.677700+0000 mgr.y (mgr.24419) 227 : cluster [DBG] pgmap v151: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:08:33.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:33 vm06 bash[27746]: cluster 2026-03-08T23:08:31.677700+0000 mgr.y (mgr.24419) 227 : cluster [DBG] pgmap v151: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:08:33.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:33 vm06 bash[27746]: cluster 2026-03-08T23:08:31.677700+0000 mgr.y (mgr.24419) 227 : cluster [DBG] pgmap v151: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:08:33.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:32 vm11 bash[23232]: cluster 2026-03-08T23:08:31.677700+0000 mgr.y (mgr.24419) 227 : cluster [DBG] pgmap v151: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:08:33.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:32 vm11 bash[23232]: cluster 2026-03-08T23:08:31.677700+0000 mgr.y (mgr.24419) 227 : cluster [DBG] pgmap v151: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:08:33.845 INFO:teuthology.orchestra.run.vm06.stderr:++ ceph auth get-key 
osd.3 2026-03-08T23:08:34.051 INFO:teuthology.orchestra.run.vm06.stderr:+ NK=AQBz/61pjxFbMxAAcKCrG39eVjspUy4thktQgA== 2026-03-08T23:08:34.051 INFO:teuthology.orchestra.run.vm06.stderr:+ '[' AQBz/61pjxFbMxAAcKCrG39eVjspUy4thktQgA== == AQBz/61pjxFbMxAAcKCrG39eVjspUy4thktQgA== ']' 2026-03-08T23:08:34.051 INFO:teuthology.orchestra.run.vm06.stderr:+ sleep 5 2026-03-08T23:08:34.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:34 vm11 bash[23232]: audit 2026-03-08T23:08:32.286813+0000 mgr.y (mgr.24419) 228 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:08:34.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:34 vm11 bash[23232]: audit 2026-03-08T23:08:32.286813+0000 mgr.y (mgr.24419) 228 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:08:34.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:34 vm06 bash[20625]: audit 2026-03-08T23:08:32.286813+0000 mgr.y (mgr.24419) 228 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:08:34.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:34 vm06 bash[20625]: audit 2026-03-08T23:08:32.286813+0000 mgr.y (mgr.24419) 228 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:08:34.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:34 vm06 bash[27746]: audit 2026-03-08T23:08:32.286813+0000 mgr.y (mgr.24419) 228 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:08:34.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:34 vm06 bash[27746]: audit 2026-03-08T23:08:32.286813+0000 mgr.y (mgr.24419) 228 : audit [DBG] from='client.24421 -' 
entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:08:35.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:35 vm06 bash[20625]: cluster 2026-03-08T23:08:33.678058+0000 mgr.y (mgr.24419) 229 : cluster [DBG] pgmap v152: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:08:35.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:35 vm06 bash[20625]: cluster 2026-03-08T23:08:33.678058+0000 mgr.y (mgr.24419) 229 : cluster [DBG] pgmap v152: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:08:35.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:35 vm06 bash[20625]: audit 2026-03-08T23:08:34.029351+0000 mon.b (mon.1) 45 : audit [INF] from='client.? 192.168.123.106:0/2234073842' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.3"}]: dispatch 2026-03-08T23:08:35.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:35 vm06 bash[20625]: audit 2026-03-08T23:08:34.029351+0000 mon.b (mon.1) 45 : audit [INF] from='client.? 
192.168.123.106:0/2234073842' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.3"}]: dispatch 2026-03-08T23:08:35.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:35 vm06 bash[27746]: cluster 2026-03-08T23:08:33.678058+0000 mgr.y (mgr.24419) 229 : cluster [DBG] pgmap v152: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:08:35.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:35 vm06 bash[27746]: cluster 2026-03-08T23:08:33.678058+0000 mgr.y (mgr.24419) 229 : cluster [DBG] pgmap v152: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:08:35.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:35 vm06 bash[27746]: audit 2026-03-08T23:08:34.029351+0000 mon.b (mon.1) 45 : audit [INF] from='client.? 192.168.123.106:0/2234073842' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.3"}]: dispatch 2026-03-08T23:08:35.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:35 vm06 bash[27746]: audit 2026-03-08T23:08:34.029351+0000 mon.b (mon.1) 45 : audit [INF] from='client.? 
192.168.123.106:0/2234073842' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.3"}]: dispatch 2026-03-08T23:08:35.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:35 vm11 bash[23232]: cluster 2026-03-08T23:08:33.678058+0000 mgr.y (mgr.24419) 229 : cluster [DBG] pgmap v152: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:08:35.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:35 vm11 bash[23232]: cluster 2026-03-08T23:08:33.678058+0000 mgr.y (mgr.24419) 229 : cluster [DBG] pgmap v152: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:08:35.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:35 vm11 bash[23232]: audit 2026-03-08T23:08:34.029351+0000 mon.b (mon.1) 45 : audit [INF] from='client.? 192.168.123.106:0/2234073842' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.3"}]: dispatch 2026-03-08T23:08:35.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:35 vm11 bash[23232]: audit 2026-03-08T23:08:34.029351+0000 mon.b (mon.1) 45 : audit [INF] from='client.? 
192.168.123.106:0/2234073842' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.3"}]: dispatch 2026-03-08T23:08:37.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:37 vm06 bash[20625]: cluster 2026-03-08T23:08:35.678472+0000 mgr.y (mgr.24419) 230 : cluster [DBG] pgmap v153: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:08:37.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:37 vm06 bash[20625]: cluster 2026-03-08T23:08:35.678472+0000 mgr.y (mgr.24419) 230 : cluster [DBG] pgmap v153: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:08:37.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:37 vm06 bash[27746]: cluster 2026-03-08T23:08:35.678472+0000 mgr.y (mgr.24419) 230 : cluster [DBG] pgmap v153: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:08:37.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:37 vm06 bash[27746]: cluster 2026-03-08T23:08:35.678472+0000 mgr.y (mgr.24419) 230 : cluster [DBG] pgmap v153: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:08:37.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:37 vm11 bash[23232]: cluster 2026-03-08T23:08:35.678472+0000 mgr.y (mgr.24419) 230 : cluster [DBG] pgmap v153: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:08:37.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:37 vm11 bash[23232]: cluster 2026-03-08T23:08:35.678472+0000 mgr.y (mgr.24419) 230 : cluster [DBG] pgmap v153: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:08:38.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:38 vm06 bash[20625]: audit 2026-03-08T23:08:37.861990+0000 mon.c (mon.2) 122 : 
audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:08:38.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:38 vm06 bash[20625]: audit 2026-03-08T23:08:37.861990+0000 mon.c (mon.2) 122 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:08:38.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:38 vm06 bash[27746]: audit 2026-03-08T23:08:37.861990+0000 mon.c (mon.2) 122 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:08:38.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:38 vm06 bash[27746]: audit 2026-03-08T23:08:37.861990+0000 mon.c (mon.2) 122 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:08:38.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:38 vm11 bash[23232]: audit 2026-03-08T23:08:37.861990+0000 mon.c (mon.2) 122 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:08:38.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:38 vm11 bash[23232]: audit 2026-03-08T23:08:37.861990+0000 mon.c (mon.2) 122 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:08:39.053 INFO:teuthology.orchestra.run.vm06.stderr:++ ceph auth get-key osd.3 2026-03-08T23:08:39.246 INFO:teuthology.orchestra.run.vm06.stderr:+ NK=AQBTAa5pbGhJDhAA2S7uoNmcNCcfNMp2jCVkww== 2026-03-08T23:08:39.246 INFO:teuthology.orchestra.run.vm06.stderr:+ '[' AQBz/61pjxFbMxAAcKCrG39eVjspUy4thktQgA== == AQBTAa5pbGhJDhAA2S7uoNmcNCcfNMp2jCVkww== ']' 2026-03-08T23:08:39.247 
INFO:teuthology.orchestra.run.vm06.stderr:+ for f in osd.0 osd.1 osd.2 osd.3 osd.4 osd.5 osd.6 osd.7 mgr.y mgr.x 2026-03-08T23:08:39.247 INFO:teuthology.orchestra.run.vm06.stderr:+ echo 'rotating key for osd.4' 2026-03-08T23:08:39.247 INFO:teuthology.orchestra.run.vm06.stdout:rotating key for osd.4 2026-03-08T23:08:39.247 INFO:teuthology.orchestra.run.vm06.stderr:++ ceph auth get-key osd.4 2026-03-08T23:08:39.437 INFO:teuthology.orchestra.run.vm06.stderr:+ K=AQCV/61pHgqLLhAASzuP93yFB1njvWTYZFqtDw== 2026-03-08T23:08:39.437 INFO:teuthology.orchestra.run.vm06.stderr:+ NK=AQCV/61pHgqLLhAASzuP93yFB1njvWTYZFqtDw== 2026-03-08T23:08:39.437 INFO:teuthology.orchestra.run.vm06.stderr:+ ceph orch daemon rotate-key osd.4 2026-03-08T23:08:39.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:39 vm06 bash[20625]: cluster 2026-03-08T23:08:37.678760+0000 mgr.y (mgr.24419) 231 : cluster [DBG] pgmap v154: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:08:39.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:39 vm06 bash[20625]: cluster 2026-03-08T23:08:37.678760+0000 mgr.y (mgr.24419) 231 : cluster [DBG] pgmap v154: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:08:39.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:39 vm06 bash[27746]: cluster 2026-03-08T23:08:37.678760+0000 mgr.y (mgr.24419) 231 : cluster [DBG] pgmap v154: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:08:39.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:39 vm06 bash[27746]: cluster 2026-03-08T23:08:37.678760+0000 mgr.y (mgr.24419) 231 : cluster [DBG] pgmap v154: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:08:39.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:39 vm11 bash[23232]: cluster 2026-03-08T23:08:37.678760+0000 
mgr.y (mgr.24419) 231 : cluster [DBG] pgmap v154: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:08:39.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:39 vm11 bash[23232]: cluster 2026-03-08T23:08:37.678760+0000 mgr.y (mgr.24419) 231 : cluster [DBG] pgmap v154: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:08:39.623 INFO:teuthology.orchestra.run.vm06.stdout:Scheduled to rotate-key osd.4 on host 'vm11' 2026-03-08T23:08:39.640 INFO:teuthology.orchestra.run.vm06.stderr:+ '[' AQCV/61pHgqLLhAASzuP93yFB1njvWTYZFqtDw== == AQCV/61pHgqLLhAASzuP93yFB1njvWTYZFqtDw== ']' 2026-03-08T23:08:39.640 INFO:teuthology.orchestra.run.vm06.stderr:+ sleep 5 2026-03-08T23:08:40.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:40 vm11 bash[23232]: audit 2026-03-08T23:08:39.237407+0000 mon.a (mon.0) 867 : audit [INF] from='client.? 192.168.123.106:0/1840949822' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.3"}]: dispatch 2026-03-08T23:08:40.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:40 vm11 bash[23232]: audit 2026-03-08T23:08:39.237407+0000 mon.a (mon.0) 867 : audit [INF] from='client.? 192.168.123.106:0/1840949822' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.3"}]: dispatch 2026-03-08T23:08:40.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:40 vm11 bash[23232]: audit 2026-03-08T23:08:39.429154+0000 mon.a (mon.0) 868 : audit [INF] from='client.? 192.168.123.106:0/2092112126' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.4"}]: dispatch 2026-03-08T23:08:40.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:40 vm11 bash[23232]: audit 2026-03-08T23:08:39.429154+0000 mon.a (mon.0) 868 : audit [INF] from='client.? 
192.168.123.106:0/2092112126' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.4"}]: dispatch 2026-03-08T23:08:40.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:40 vm11 bash[23232]: audit 2026-03-08T23:08:39.605743+0000 mon.a (mon.0) 869 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:08:40.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:40 vm11 bash[23232]: audit 2026-03-08T23:08:39.605743+0000 mon.a (mon.0) 869 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:08:40.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:40 vm11 bash[23232]: audit 2026-03-08T23:08:39.622224+0000 mon.a (mon.0) 870 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:08:40.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:40 vm11 bash[23232]: audit 2026-03-08T23:08:39.622224+0000 mon.a (mon.0) 870 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:08:40.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:40 vm11 bash[23232]: audit 2026-03-08T23:08:39.624327+0000 mon.c (mon.2) 123 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:08:40.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:40 vm11 bash[23232]: audit 2026-03-08T23:08:39.624327+0000 mon.c (mon.2) 123 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:08:40.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:40 vm11 bash[23232]: audit 2026-03-08T23:08:39.627745+0000 mon.c (mon.2) 124 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:08:40.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:40 vm11 bash[23232]: audit 2026-03-08T23:08:39.627745+0000 mon.c (mon.2) 124 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' 
entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:08:40.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:40 vm11 bash[23232]: audit 2026-03-08T23:08:39.628453+0000 mon.c (mon.2) 125 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:08:40.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:40 vm11 bash[23232]: audit 2026-03-08T23:08:39.628453+0000 mon.c (mon.2) 125 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:08:40.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:40 vm11 bash[23232]: audit 2026-03-08T23:08:39.639013+0000 mon.a (mon.0) 871 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:08:40.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:40 vm11 bash[23232]: audit 2026-03-08T23:08:39.639013+0000 mon.a (mon.0) 871 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:08:40.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:40 vm11 bash[23232]: audit 2026-03-08T23:08:39.652591+0000 mon.c (mon.2) 126 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get-or-create-pending", "entity": "osd.4", "format": "json"}]: dispatch 2026-03-08T23:08:40.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:40 vm11 bash[23232]: audit 2026-03-08T23:08:39.652591+0000 mon.c (mon.2) 126 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get-or-create-pending", "entity": "osd.4", "format": "json"}]: dispatch 2026-03-08T23:08:40.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:40 vm11 bash[23232]: audit 2026-03-08T23:08:39.653299+0000 mon.a (mon.0) 872 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create-pending", "entity": "osd.4", "format": "json"}]: dispatch 
2026-03-08T23:08:40.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:40 vm11 bash[23232]: audit 2026-03-08T23:08:39.653299+0000 mon.a (mon.0) 872 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create-pending", "entity": "osd.4", "format": "json"}]: dispatch 2026-03-08T23:08:40.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:40 vm11 bash[23232]: audit 2026-03-08T23:08:39.660507+0000 mon.a (mon.0) 873 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd='[{"prefix": "auth get-or-create-pending", "entity": "osd.4", "format": "json"}]': finished 2026-03-08T23:08:40.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:40 vm11 bash[23232]: audit 2026-03-08T23:08:39.660507+0000 mon.a (mon.0) 873 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd='[{"prefix": "auth get-or-create-pending", "entity": "osd.4", "format": "json"}]': finished 2026-03-08T23:08:40.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:40 vm06 bash[20625]: audit 2026-03-08T23:08:39.237407+0000 mon.a (mon.0) 867 : audit [INF] from='client.? 192.168.123.106:0/1840949822' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.3"}]: dispatch 2026-03-08T23:08:40.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:40 vm06 bash[20625]: audit 2026-03-08T23:08:39.237407+0000 mon.a (mon.0) 867 : audit [INF] from='client.? 192.168.123.106:0/1840949822' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.3"}]: dispatch 2026-03-08T23:08:40.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:40 vm06 bash[20625]: audit 2026-03-08T23:08:39.429154+0000 mon.a (mon.0) 868 : audit [INF] from='client.? 192.168.123.106:0/2092112126' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.4"}]: dispatch 2026-03-08T23:08:40.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:40 vm06 bash[20625]: audit 2026-03-08T23:08:39.429154+0000 mon.a (mon.0) 868 : audit [INF] from='client.? 
192.168.123.106:0/2092112126' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.4"}]: dispatch 2026-03-08T23:08:40.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:40 vm06 bash[20625]: audit 2026-03-08T23:08:39.605743+0000 mon.a (mon.0) 869 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:08:40.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:40 vm06 bash[20625]: audit 2026-03-08T23:08:39.605743+0000 mon.a (mon.0) 869 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:08:40.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:40 vm06 bash[20625]: audit 2026-03-08T23:08:39.622224+0000 mon.a (mon.0) 870 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:08:40.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:40 vm06 bash[20625]: audit 2026-03-08T23:08:39.622224+0000 mon.a (mon.0) 870 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:08:40.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:40 vm06 bash[20625]: audit 2026-03-08T23:08:39.624327+0000 mon.c (mon.2) 123 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:08:40.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:40 vm06 bash[20625]: audit 2026-03-08T23:08:39.624327+0000 mon.c (mon.2) 123 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:08:40.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:40 vm06 bash[20625]: audit 2026-03-08T23:08:39.627745+0000 mon.c (mon.2) 124 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:08:40.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:40 vm06 bash[20625]: audit 2026-03-08T23:08:39.627745+0000 mon.c (mon.2) 124 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' 
entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:08:40.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:40 vm06 bash[20625]: audit 2026-03-08T23:08:39.628453+0000 mon.c (mon.2) 125 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:08:40.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:40 vm06 bash[20625]: audit 2026-03-08T23:08:39.628453+0000 mon.c (mon.2) 125 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:08:40.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:40 vm06 bash[20625]: audit 2026-03-08T23:08:39.639013+0000 mon.a (mon.0) 871 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:08:40.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:40 vm06 bash[20625]: audit 2026-03-08T23:08:39.639013+0000 mon.a (mon.0) 871 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:08:40.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:40 vm06 bash[20625]: audit 2026-03-08T23:08:39.652591+0000 mon.c (mon.2) 126 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get-or-create-pending", "entity": "osd.4", "format": "json"}]: dispatch 2026-03-08T23:08:40.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:40 vm06 bash[20625]: audit 2026-03-08T23:08:39.652591+0000 mon.c (mon.2) 126 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get-or-create-pending", "entity": "osd.4", "format": "json"}]: dispatch 2026-03-08T23:08:40.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:40 vm06 bash[20625]: audit 2026-03-08T23:08:39.653299+0000 mon.a (mon.0) 872 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create-pending", "entity": "osd.4", "format": "json"}]: dispatch 
2026-03-08T23:08:40.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:40 vm06 bash[20625]: audit 2026-03-08T23:08:39.653299+0000 mon.a (mon.0) 872 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create-pending", "entity": "osd.4", "format": "json"}]: dispatch 2026-03-08T23:08:40.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:40 vm06 bash[20625]: audit 2026-03-08T23:08:39.660507+0000 mon.a (mon.0) 873 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd='[{"prefix": "auth get-or-create-pending", "entity": "osd.4", "format": "json"}]': finished 2026-03-08T23:08:40.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:40 vm06 bash[20625]: audit 2026-03-08T23:08:39.660507+0000 mon.a (mon.0) 873 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd='[{"prefix": "auth get-or-create-pending", "entity": "osd.4", "format": "json"}]': finished 2026-03-08T23:08:40.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:40 vm06 bash[27746]: audit 2026-03-08T23:08:39.237407+0000 mon.a (mon.0) 867 : audit [INF] from='client.? 192.168.123.106:0/1840949822' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.3"}]: dispatch 2026-03-08T23:08:40.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:40 vm06 bash[27746]: audit 2026-03-08T23:08:39.237407+0000 mon.a (mon.0) 867 : audit [INF] from='client.? 192.168.123.106:0/1840949822' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.3"}]: dispatch 2026-03-08T23:08:40.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:40 vm06 bash[27746]: audit 2026-03-08T23:08:39.429154+0000 mon.a (mon.0) 868 : audit [INF] from='client.? 192.168.123.106:0/2092112126' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.4"}]: dispatch 2026-03-08T23:08:40.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:40 vm06 bash[27746]: audit 2026-03-08T23:08:39.429154+0000 mon.a (mon.0) 868 : audit [INF] from='client.? 
192.168.123.106:0/2092112126' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.4"}]: dispatch 2026-03-08T23:08:40.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:40 vm06 bash[27746]: audit 2026-03-08T23:08:39.605743+0000 mon.a (mon.0) 869 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:08:40.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:40 vm06 bash[27746]: audit 2026-03-08T23:08:39.605743+0000 mon.a (mon.0) 869 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:08:40.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:40 vm06 bash[27746]: audit 2026-03-08T23:08:39.622224+0000 mon.a (mon.0) 870 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:08:40.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:40 vm06 bash[27746]: audit 2026-03-08T23:08:39.622224+0000 mon.a (mon.0) 870 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:08:40.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:40 vm06 bash[27746]: audit 2026-03-08T23:08:39.624327+0000 mon.c (mon.2) 123 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:08:40.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:40 vm06 bash[27746]: audit 2026-03-08T23:08:39.624327+0000 mon.c (mon.2) 123 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:08:40.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:40 vm06 bash[27746]: audit 2026-03-08T23:08:39.627745+0000 mon.c (mon.2) 124 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:08:40.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:40 vm06 bash[27746]: audit 2026-03-08T23:08:39.627745+0000 mon.c (mon.2) 124 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' 
entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:08:40.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:40 vm06 bash[27746]: audit 2026-03-08T23:08:39.628453+0000 mon.c (mon.2) 125 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:08:40.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:40 vm06 bash[27746]: audit 2026-03-08T23:08:39.628453+0000 mon.c (mon.2) 125 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:08:40.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:40 vm06 bash[27746]: audit 2026-03-08T23:08:39.639013+0000 mon.a (mon.0) 871 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:08:40.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:40 vm06 bash[27746]: audit 2026-03-08T23:08:39.639013+0000 mon.a (mon.0) 871 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:08:40.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:40 vm06 bash[27746]: audit 2026-03-08T23:08:39.652591+0000 mon.c (mon.2) 126 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get-or-create-pending", "entity": "osd.4", "format": "json"}]: dispatch 2026-03-08T23:08:40.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:40 vm06 bash[27746]: audit 2026-03-08T23:08:39.652591+0000 mon.c (mon.2) 126 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get-or-create-pending", "entity": "osd.4", "format": "json"}]: dispatch 2026-03-08T23:08:40.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:40 vm06 bash[27746]: audit 2026-03-08T23:08:39.653299+0000 mon.a (mon.0) 872 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create-pending", "entity": "osd.4", "format": "json"}]: dispatch 
2026-03-08T23:08:40.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:40 vm06 bash[27746]: audit 2026-03-08T23:08:39.653299+0000 mon.a (mon.0) 872 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create-pending", "entity": "osd.4", "format": "json"}]: dispatch 2026-03-08T23:08:40.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:40 vm06 bash[27746]: audit 2026-03-08T23:08:39.660507+0000 mon.a (mon.0) 873 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd='[{"prefix": "auth get-or-create-pending", "entity": "osd.4", "format": "json"}]': finished 2026-03-08T23:08:40.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:40 vm06 bash[27746]: audit 2026-03-08T23:08:39.660507+0000 mon.a (mon.0) 873 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd='[{"prefix": "auth get-or-create-pending", "entity": "osd.4", "format": "json"}]': finished 2026-03-08T23:08:41.029 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:08:40 vm06 bash[20883]: ::ffff:192.168.123.111 - - [08/Mar/2026:23:08:40] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-08T23:08:41.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:41 vm06 bash[20625]: audit 2026-03-08T23:08:39.598336+0000 mgr.y (mgr.24419) 232 : audit [DBG] from='client.24866 -' entity='client.admin' cmd=[{"prefix": "orch daemon", "action": "rotate-key", "name": "osd.4", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:08:41.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:41 vm06 bash[20625]: audit 2026-03-08T23:08:39.598336+0000 mgr.y (mgr.24419) 232 : audit [DBG] from='client.24866 -' entity='client.admin' cmd=[{"prefix": "orch daemon", "action": "rotate-key", "name": "osd.4", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:08:41.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:41 vm06 bash[20625]: cephadm 2026-03-08T23:08:39.598756+0000 mgr.y (mgr.24419) 233 : cephadm [INF] Schedule rotate-key daemon osd.4 2026-03-08T23:08:41.529 
INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:41 vm06 bash[20625]: cephadm 2026-03-08T23:08:39.598756+0000 mgr.y (mgr.24419) 233 : cephadm [INF] Schedule rotate-key daemon osd.4 2026-03-08T23:08:41.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:41 vm06 bash[20625]: cephadm 2026-03-08T23:08:39.652264+0000 mgr.y (mgr.24419) 234 : cephadm [INF] Rotating authentication key for osd.4 2026-03-08T23:08:41.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:41 vm06 bash[20625]: cephadm 2026-03-08T23:08:39.652264+0000 mgr.y (mgr.24419) 234 : cephadm [INF] Rotating authentication key for osd.4 2026-03-08T23:08:41.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:41 vm06 bash[20625]: cephadm 2026-03-08T23:08:39.664956+0000 mgr.y (mgr.24419) 235 : cephadm [INF] Reconfiguring daemon osd.4 on vm11 2026-03-08T23:08:41.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:41 vm06 bash[20625]: cephadm 2026-03-08T23:08:39.664956+0000 mgr.y (mgr.24419) 235 : cephadm [INF] Reconfiguring daemon osd.4 on vm11 2026-03-08T23:08:41.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:41 vm06 bash[20625]: cluster 2026-03-08T23:08:39.679037+0000 mgr.y (mgr.24419) 236 : cluster [DBG] pgmap v155: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:08:41.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:41 vm06 bash[20625]: cluster 2026-03-08T23:08:39.679037+0000 mgr.y (mgr.24419) 236 : cluster [DBG] pgmap v155: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:08:41.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:41 vm06 bash[20625]: audit 2026-03-08T23:08:40.173458+0000 mon.a (mon.0) 874 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:08:41.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:41 vm06 bash[20625]: audit 2026-03-08T23:08:40.173458+0000 mon.a (mon.0) 874 : audit [INF] from='mgr.24419 ' 
entity='mgr.y' 2026-03-08T23:08:41.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:41 vm06 bash[20625]: audit 2026-03-08T23:08:40.199736+0000 mon.a (mon.0) 875 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:08:41.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:41 vm06 bash[20625]: audit 2026-03-08T23:08:40.199736+0000 mon.a (mon.0) 875 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:08:41.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:41 vm06 bash[20625]: audit 2026-03-08T23:08:40.414547+0000 mon.a (mon.0) 876 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:08:41.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:41 vm06 bash[20625]: audit 2026-03-08T23:08:40.414547+0000 mon.a (mon.0) 876 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:08:41.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:41 vm06 bash[20625]: audit 2026-03-08T23:08:40.455971+0000 mon.a (mon.0) 877 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:08:41.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:41 vm06 bash[20625]: audit 2026-03-08T23:08:40.455971+0000 mon.a (mon.0) 877 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:08:41.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:41 vm06 bash[27746]: audit 2026-03-08T23:08:39.598336+0000 mgr.y (mgr.24419) 232 : audit [DBG] from='client.24866 -' entity='client.admin' cmd=[{"prefix": "orch daemon", "action": "rotate-key", "name": "osd.4", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:08:41.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:41 vm06 bash[27746]: audit 2026-03-08T23:08:39.598336+0000 mgr.y (mgr.24419) 232 : audit [DBG] from='client.24866 -' entity='client.admin' cmd=[{"prefix": "orch daemon", "action": "rotate-key", "name": "osd.4", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:08:41.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:41 vm06 bash[27746]: cephadm 
2026-03-08T23:08:39.598756+0000 mgr.y (mgr.24419) 233 : cephadm [INF] Schedule rotate-key daemon osd.4
2026-03-08T23:08:41.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:41 vm06 bash[27746]: cephadm 2026-03-08T23:08:39.598756+0000 mgr.y (mgr.24419) 233 : cephadm [INF] Schedule rotate-key daemon osd.4
2026-03-08T23:08:41.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:41 vm06 bash[27746]: cephadm 2026-03-08T23:08:39.652264+0000 mgr.y (mgr.24419) 234 : cephadm [INF] Rotating authentication key for osd.4
2026-03-08T23:08:41.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:41 vm06 bash[27746]: cephadm 2026-03-08T23:08:39.664956+0000 mgr.y (mgr.24419) 235 : cephadm [INF] Reconfiguring daemon osd.4 on vm11
2026-03-08T23:08:41.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:41 vm06 bash[27746]: cluster 2026-03-08T23:08:39.679037+0000 mgr.y (mgr.24419) 236 : cluster [DBG] pgmap v155: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-08T23:08:41.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:41 vm06 bash[27746]: audit 2026-03-08T23:08:40.173458+0000 mon.a (mon.0) 874 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:08:41.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:41 vm06 bash[27746]: audit 2026-03-08T23:08:40.199736+0000 mon.a (mon.0) 875 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:08:41.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:41 vm06 bash[27746]: audit 2026-03-08T23:08:40.414547+0000 mon.a (mon.0) 876 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:08:41.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:41 vm06 bash[27746]: audit 2026-03-08T23:08:40.455971+0000 mon.a (mon.0) 877 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:08:41.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:41 vm11 bash[23232]: audit 2026-03-08T23:08:39.598336+0000 mgr.y (mgr.24419) 232 : audit [DBG] from='client.24866 -' entity='client.admin' cmd=[{"prefix": "orch daemon", "action": "rotate-key", "name": "osd.4", "target": ["mon-mgr", ""]}]: dispatch
2026-03-08T23:08:41.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:41 vm11 bash[23232]: cephadm 2026-03-08T23:08:39.598756+0000 mgr.y (mgr.24419) 233 : cephadm [INF] Schedule rotate-key daemon osd.4
2026-03-08T23:08:41.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:41 vm11 bash[23232]: cephadm 2026-03-08T23:08:39.652264+0000 mgr.y (mgr.24419) 234 : cephadm [INF] Rotating authentication key for osd.4
2026-03-08T23:08:41.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:41 vm11 bash[23232]: cephadm 2026-03-08T23:08:39.664956+0000 mgr.y (mgr.24419) 235 : cephadm [INF] Reconfiguring daemon osd.4 on vm11
2026-03-08T23:08:41.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:41 vm11 bash[23232]: cluster 2026-03-08T23:08:39.679037+0000 mgr.y (mgr.24419) 236 : cluster [DBG] pgmap v155: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-08T23:08:41.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:41 vm11 bash[23232]: audit 2026-03-08T23:08:40.173458+0000 mon.a (mon.0) 874 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:08:41.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:41 vm11 bash[23232]: audit 2026-03-08T23:08:40.199736+0000 mon.a (mon.0) 875 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:08:41.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:41 vm11 bash[23232]: audit 2026-03-08T23:08:40.414547+0000 mon.a (mon.0) 876 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:08:41.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:41 vm11 bash[23232]: audit 2026-03-08T23:08:40.455971+0000 mon.a (mon.0) 877 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:08:42.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:42 vm06 bash[20625]: cluster 2026-03-08T23:08:41.679751+0000 mgr.y (mgr.24419) 237 : cluster [DBG] pgmap v156: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-08T23:08:42.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:42 vm06 bash[27746]: cluster 2026-03-08T23:08:41.679751+0000 mgr.y (mgr.24419) 237 : cluster [DBG] pgmap v156: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-08T23:08:42.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:42 vm11 bash[23232]: cluster 2026-03-08T23:08:41.679751+0000 mgr.y (mgr.24419) 237 : cluster [DBG] pgmap v156: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-08T23:08:42.558 INFO:journalctl@ceph.iscsi.iscsi.a.vm11.stdout:Mar 08 23:08:42 vm11 bash[48986]: debug there is no tcmu-runner data available
2026-03-08T23:08:43.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:43 vm06 bash[20625]: audit 2026-03-08T23:08:42.297632+0000 mgr.y (mgr.24419) 238 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:08:43.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:43 vm06 bash[27746]: audit 2026-03-08T23:08:42.297632+0000 mgr.y (mgr.24419) 238 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:08:43.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:43 vm11 bash[23232]: audit 2026-03-08T23:08:42.297632+0000 mgr.y (mgr.24419) 238 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:08:44.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:44 vm06 bash[20625]: cluster 2026-03-08T23:08:43.680044+0000 mgr.y (mgr.24419) 239 : cluster [DBG] pgmap v157: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-08T23:08:44.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:44 vm06 bash[27746]: cluster 2026-03-08T23:08:43.680044+0000 mgr.y (mgr.24419) 239 : cluster [DBG] pgmap v157: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-08T23:08:44.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:44 vm11 bash[23232]: cluster 2026-03-08T23:08:43.680044+0000 mgr.y (mgr.24419) 239 : cluster [DBG] pgmap v157: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-08T23:08:44.642 INFO:teuthology.orchestra.run.vm06.stderr:++ ceph auth get-key osd.4
2026-03-08T23:08:44.840 INFO:teuthology.orchestra.run.vm06.stderr:+ NK=AQCV/61pHgqLLhAASzuP93yFB1njvWTYZFqtDw==
2026-03-08T23:08:44.841 INFO:teuthology.orchestra.run.vm06.stderr:+ '[' AQCV/61pHgqLLhAASzuP93yFB1njvWTYZFqtDw== == AQCV/61pHgqLLhAASzuP93yFB1njvWTYZFqtDw== ']'
2026-03-08T23:08:44.841 INFO:teuthology.orchestra.run.vm06.stderr:+ sleep 5
2026-03-08T23:08:45.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:45 vm06 bash[20625]: audit 2026-03-08T23:08:44.832004+0000 mon.a (mon.0) 878 : audit [INF] from='client.? 192.168.123.106:0/3800204063' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.4"}]: dispatch
2026-03-08T23:08:45.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:45 vm06 bash[27746]: audit 2026-03-08T23:08:44.832004+0000 mon.a (mon.0) 878 : audit [INF] from='client.? 192.168.123.106:0/3800204063' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.4"}]: dispatch
2026-03-08T23:08:45.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:45 vm11 bash[23232]: audit 2026-03-08T23:08:44.832004+0000 mon.a (mon.0) 878 : audit [INF] from='client.? 192.168.123.106:0/3800204063' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.4"}]: dispatch
2026-03-08T23:08:46.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:46 vm06 bash[20625]: cluster 2026-03-08T23:08:45.680552+0000 mgr.y (mgr.24419) 240 : cluster [DBG] pgmap v158: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-08T23:08:46.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:46 vm06 bash[27746]: cluster 2026-03-08T23:08:45.680552+0000 mgr.y (mgr.24419) 240 : cluster [DBG] pgmap v158: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-08T23:08:46.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:46 vm11 bash[23232]: cluster 2026-03-08T23:08:45.680552+0000 mgr.y (mgr.24419) 240 : cluster [DBG] pgmap v158: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-08T23:08:49.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:48 vm06 bash[20625]: cluster 2026-03-08T23:08:47.680826+0000 mgr.y (mgr.24419) 241 : cluster [DBG] pgmap v159: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-08T23:08:49.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:48 vm06 bash[27746]: cluster 2026-03-08T23:08:47.680826+0000 mgr.y (mgr.24419) 241 : cluster [DBG] pgmap v159: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-08T23:08:49.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:48 vm11 bash[23232]: cluster 2026-03-08T23:08:47.680826+0000 mgr.y (mgr.24419) 241 : cluster [DBG] pgmap v159: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-08T23:08:49.842 INFO:teuthology.orchestra.run.vm06.stderr:++ ceph auth get-key osd.4
2026-03-08T23:08:50.034 INFO:teuthology.orchestra.run.vm06.stderr:+ NK=AQCV/61pHgqLLhAASzuP93yFB1njvWTYZFqtDw==
2026-03-08T23:08:50.034 INFO:teuthology.orchestra.run.vm06.stderr:+ '[' AQCV/61pHgqLLhAASzuP93yFB1njvWTYZFqtDw== == AQCV/61pHgqLLhAASzuP93yFB1njvWTYZFqtDw== ']'
2026-03-08T23:08:50.034 INFO:teuthology.orchestra.run.vm06.stderr:+ sleep 5
2026-03-08T23:08:51.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:50 vm06 bash[20625]: cluster 2026-03-08T23:08:49.681130+0000 mgr.y (mgr.24419) 242 : cluster [DBG] pgmap v160: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-08T23:08:51.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:50 vm06 bash[20625]: audit 2026-03-08T23:08:50.024946+0000 mon.a (mon.0) 879 : audit [INF] from='client.? 192.168.123.106:0/742375109' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.4"}]: dispatch
2026-03-08T23:08:51.029 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:08:50 vm06 bash[20883]: ::ffff:192.168.123.111 - - [08/Mar/2026:23:08:50] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-08T23:08:51.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:50 vm06 bash[27746]: cluster 2026-03-08T23:08:49.681130+0000 mgr.y (mgr.24419) 242 : cluster [DBG] pgmap v160: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-08T23:08:51.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:50 vm06 bash[27746]: audit 2026-03-08T23:08:50.024946+0000 mon.a (mon.0) 879 : audit [INF] from='client.? 192.168.123.106:0/742375109' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.4"}]: dispatch
2026-03-08T23:08:51.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:50 vm11 bash[23232]: cluster 2026-03-08T23:08:49.681130+0000 mgr.y (mgr.24419) 242 : cluster [DBG] pgmap v160: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-08T23:08:51.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:50 vm11 bash[23232]: audit 2026-03-08T23:08:50.024946+0000 mon.a (mon.0) 879 : audit [INF] from='client.? 192.168.123.106:0/742375109' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.4"}]: dispatch
2026-03-08T23:08:52.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:52 vm06 bash[20625]: cluster 2026-03-08T23:08:51.681530+0000 mgr.y (mgr.24419) 243 : cluster [DBG] pgmap v161: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-08T23:08:52.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:52 vm06 bash[27746]: cluster 2026-03-08T23:08:51.681530+0000 mgr.y (mgr.24419) 243 : cluster [DBG] pgmap v161: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-08T23:08:52.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:52 vm11 bash[23232]: cluster 2026-03-08T23:08:51.681530+0000 mgr.y (mgr.24419) 243 : cluster [DBG] pgmap v161: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-08T23:08:52.558 INFO:journalctl@ceph.iscsi.iscsi.a.vm11.stdout:Mar 08 23:08:52 vm11 bash[48986]: debug there is no tcmu-runner data available
2026-03-08T23:08:53.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:53 vm06 bash[20625]: audit 2026-03-08T23:08:52.308403+0000 mgr.y (mgr.24419) 244 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:08:53.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:53 vm06 bash[20625]: audit 2026-03-08T23:08:52.867903+0000 mon.c (mon.2) 127 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-08T23:08:53.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:53 vm06 bash[27746]: audit 2026-03-08T23:08:52.308403+0000 mgr.y (mgr.24419) 244 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:08:53.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:53 vm06 bash[27746]: audit 2026-03-08T23:08:52.867903+0000 mon.c (mon.2) 127 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-08T23:08:53.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:53 vm11 bash[23232]: audit 2026-03-08T23:08:52.308403+0000 mgr.y (mgr.24419) 244 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:08:53.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:53 vm11 bash[23232]: audit 2026-03-08T23:08:52.867903+0000 mon.c (mon.2) 127 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-08T23:08:54.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:54 vm06 bash[20625]: cluster 2026-03-08T23:08:53.681795+0000 mgr.y (mgr.24419) 245 : cluster [DBG] pgmap v162: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-08T23:08:54.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:54 vm06 bash[27746]: cluster 2026-03-08T23:08:53.681795+0000 mgr.y (mgr.24419) 245 : cluster [DBG] pgmap v162: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-08T23:08:54.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:54 vm11 bash[23232]: cluster 2026-03-08T23:08:53.681795+0000 mgr.y (mgr.24419) 245 : cluster [DBG] pgmap v162: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-08T23:08:55.035 INFO:teuthology.orchestra.run.vm06.stderr:++ ceph auth get-key osd.4
2026-03-08T23:08:55.236 INFO:teuthology.orchestra.run.vm06.stderr:+ NK=AQCV/61pHgqLLhAASzuP93yFB1njvWTYZFqtDw==
2026-03-08T23:08:55.236 INFO:teuthology.orchestra.run.vm06.stderr:+ '[' AQCV/61pHgqLLhAASzuP93yFB1njvWTYZFqtDw== == AQCV/61pHgqLLhAASzuP93yFB1njvWTYZFqtDw== ']'
2026-03-08T23:08:55.236 INFO:teuthology.orchestra.run.vm06.stderr:+ sleep 5
2026-03-08T23:08:55.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:55 vm06 bash[20625]: audit 2026-03-08T23:08:55.227954+0000 mon.c (mon.2) 128 : audit [INF] from='client.? 192.168.123.106:0/2327595718' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.4"}]: dispatch
2026-03-08T23:08:55.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:55 vm06 bash[27746]: audit 2026-03-08T23:08:55.227954+0000 mon.c (mon.2) 128 : audit [INF] from='client.? 192.168.123.106:0/2327595718' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.4"}]: dispatch
2026-03-08T23:08:55.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:55 vm11 bash[23232]: audit 2026-03-08T23:08:55.227954+0000 mon.c (mon.2) 128 : audit [INF] from='client.? 192.168.123.106:0/2327595718' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.4"}]: dispatch
2026-03-08T23:08:56.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:56 vm06 bash[20625]: cluster 2026-03-08T23:08:55.682367+0000 mgr.y (mgr.24419) 246 : cluster [DBG] pgmap v163: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-08T23:08:56.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:56 vm06 bash[27746]: cluster 2026-03-08T23:08:55.682367+0000 mgr.y (mgr.24419) 246 : cluster [DBG] pgmap v163: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-08T23:08:56.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:56 vm11 bash[23232]: cluster 2026-03-08T23:08:55.682367+0000 mgr.y (mgr.24419) 246 : cluster [DBG] pgmap v163: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-08T23:08:59.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:08:58 vm06 bash[20625]: cluster 2026-03-08T23:08:57.682639+0000 mgr.y (mgr.24419) 247 : cluster [DBG] pgmap v164: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-08T23:08:59.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:08:58 vm06 bash[27746]: cluster 2026-03-08T23:08:57.682639+0000 mgr.y (mgr.24419) 247 : cluster [DBG] pgmap v164: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-08T23:08:59.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:08:58 vm11 bash[23232]: cluster 2026-03-08T23:08:57.682639+0000 mgr.y (mgr.24419) 247 : cluster [DBG] pgmap v164: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-08T23:09:00.239 INFO:teuthology.orchestra.run.vm06.stderr:++ ceph auth get-key osd.4
2026-03-08T23:09:00.431 INFO:teuthology.orchestra.run.vm06.stderr:+ NK=AQCV/61pHgqLLhAASzuP93yFB1njvWTYZFqtDw==
2026-03-08T23:09:00.431 INFO:teuthology.orchestra.run.vm06.stderr:+ '[' AQCV/61pHgqLLhAASzuP93yFB1njvWTYZFqtDw== == AQCV/61pHgqLLhAASzuP93yFB1njvWTYZFqtDw== ']'
2026-03-08T23:09:00.431 INFO:teuthology.orchestra.run.vm06.stderr:+ sleep 5
2026-03-08T23:09:01.029 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:09:00 vm06 bash[20883]: ::ffff:192.168.123.111 - - [08/Mar/2026:23:09:00] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-08T23:09:01.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:00 vm06 bash[27746]: cluster 2026-03-08T23:08:59.682906+0000 mgr.y (mgr.24419) 248 : cluster [DBG] pgmap v165: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-08T23:09:01.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:00 vm06 bash[27746]: audit 2026-03-08T23:09:00.422934+0000 mon.c (mon.2) 129 : audit [INF] from='client.? 192.168.123.106:0/4120345812' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.4"}]: dispatch
2026-03-08T23:09:01.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:00 vm06 bash[20625]: cluster 2026-03-08T23:08:59.682906+0000 mgr.y (mgr.24419) 248 : cluster [DBG] pgmap v165: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-08T23:09:01.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:00 vm06 bash[20625]: audit 2026-03-08T23:09:00.422934+0000 mon.c (mon.2) 129 : audit [INF] from='client.? 192.168.123.106:0/4120345812' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.4"}]: dispatch
2026-03-08T23:09:01.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:00 vm06 bash[20625]: audit 2026-03-08T23:09:00.422934+0000 mon.a (mon.0) 129 : audit [INF] from='client.?
192.168.123.106:0/4120345812' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.4"}]: dispatch 2026-03-08T23:09:01.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:00 vm11 bash[23232]: cluster 2026-03-08T23:08:59.682906+0000 mgr.y (mgr.24419) 248 : cluster [DBG] pgmap v165: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:09:01.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:00 vm11 bash[23232]: cluster 2026-03-08T23:08:59.682906+0000 mgr.y (mgr.24419) 248 : cluster [DBG] pgmap v165: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:09:01.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:00 vm11 bash[23232]: audit 2026-03-08T23:09:00.422934+0000 mon.c (mon.2) 129 : audit [INF] from='client.? 192.168.123.106:0/4120345812' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.4"}]: dispatch 2026-03-08T23:09:01.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:00 vm11 bash[23232]: audit 2026-03-08T23:09:00.422934+0000 mon.c (mon.2) 129 : audit [INF] from='client.? 
192.168.123.106:0/4120345812' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.4"}]: dispatch 2026-03-08T23:09:02.808 INFO:journalctl@ceph.iscsi.iscsi.a.vm11.stdout:Mar 08 23:09:02 vm11 bash[48986]: debug there is no tcmu-runner data available 2026-03-08T23:09:03.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:02 vm06 bash[20625]: cluster 2026-03-08T23:09:01.683321+0000 mgr.y (mgr.24419) 249 : cluster [DBG] pgmap v166: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:09:03.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:02 vm06 bash[20625]: cluster 2026-03-08T23:09:01.683321+0000 mgr.y (mgr.24419) 249 : cluster [DBG] pgmap v166: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:09:03.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:02 vm06 bash[27746]: cluster 2026-03-08T23:09:01.683321+0000 mgr.y (mgr.24419) 249 : cluster [DBG] pgmap v166: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:09:03.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:02 vm06 bash[27746]: cluster 2026-03-08T23:09:01.683321+0000 mgr.y (mgr.24419) 249 : cluster [DBG] pgmap v166: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:09:03.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:02 vm11 bash[23232]: cluster 2026-03-08T23:09:01.683321+0000 mgr.y (mgr.24419) 249 : cluster [DBG] pgmap v166: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:09:03.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:02 vm11 bash[23232]: cluster 2026-03-08T23:09:01.683321+0000 mgr.y (mgr.24419) 249 : cluster [DBG] pgmap v166: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 
2026-03-08T23:09:04.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:03 vm06 bash[20625]: audit 2026-03-08T23:09:02.317745+0000 mgr.y (mgr.24419) 250 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:09:04.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:03 vm06 bash[20625]: audit 2026-03-08T23:09:02.317745+0000 mgr.y (mgr.24419) 250 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:09:04.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:03 vm06 bash[27746]: audit 2026-03-08T23:09:02.317745+0000 mgr.y (mgr.24419) 250 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:09:04.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:03 vm06 bash[27746]: audit 2026-03-08T23:09:02.317745+0000 mgr.y (mgr.24419) 250 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:09:04.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:03 vm11 bash[23232]: audit 2026-03-08T23:09:02.317745+0000 mgr.y (mgr.24419) 250 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:09:04.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:03 vm11 bash[23232]: audit 2026-03-08T23:09:02.317745+0000 mgr.y (mgr.24419) 250 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:09:05.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:04 vm06 bash[20625]: cluster 2026-03-08T23:09:03.683615+0000 mgr.y (mgr.24419) 251 : cluster [DBG] pgmap v167: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 
op/s 2026-03-08T23:09:05.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:04 vm06 bash[20625]: cluster 2026-03-08T23:09:03.683615+0000 mgr.y (mgr.24419) 251 : cluster [DBG] pgmap v167: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:09:05.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:04 vm06 bash[27746]: cluster 2026-03-08T23:09:03.683615+0000 mgr.y (mgr.24419) 251 : cluster [DBG] pgmap v167: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:09:05.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:04 vm06 bash[27746]: cluster 2026-03-08T23:09:03.683615+0000 mgr.y (mgr.24419) 251 : cluster [DBG] pgmap v167: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:09:05.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:04 vm11 bash[23232]: cluster 2026-03-08T23:09:03.683615+0000 mgr.y (mgr.24419) 251 : cluster [DBG] pgmap v167: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:09:05.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:04 vm11 bash[23232]: cluster 2026-03-08T23:09:03.683615+0000 mgr.y (mgr.24419) 251 : cluster [DBG] pgmap v167: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:09:05.433 INFO:teuthology.orchestra.run.vm06.stderr:++ ceph auth get-key osd.4 2026-03-08T23:09:05.615 INFO:teuthology.orchestra.run.vm06.stderr:+ NK=AQCV/61pHgqLLhAASzuP93yFB1njvWTYZFqtDw== 2026-03-08T23:09:05.616 INFO:teuthology.orchestra.run.vm06.stderr:+ '[' AQCV/61pHgqLLhAASzuP93yFB1njvWTYZFqtDw== == AQCV/61pHgqLLhAASzuP93yFB1njvWTYZFqtDw== ']' 2026-03-08T23:09:05.616 INFO:teuthology.orchestra.run.vm06.stderr:+ sleep 5 2026-03-08T23:09:06.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:05 vm06 bash[20625]: audit 
2026-03-08T23:09:05.607410+0000 mon.c (mon.2) 130 : audit [INF] from='client.? 192.168.123.106:0/4058802871' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.4"}]: dispatch 2026-03-08T23:09:06.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:05 vm06 bash[20625]: audit 2026-03-08T23:09:05.607410+0000 mon.c (mon.2) 130 : audit [INF] from='client.? 192.168.123.106:0/4058802871' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.4"}]: dispatch 2026-03-08T23:09:06.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:05 vm06 bash[27746]: audit 2026-03-08T23:09:05.607410+0000 mon.c (mon.2) 130 : audit [INF] from='client.? 192.168.123.106:0/4058802871' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.4"}]: dispatch 2026-03-08T23:09:06.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:05 vm06 bash[27746]: audit 2026-03-08T23:09:05.607410+0000 mon.c (mon.2) 130 : audit [INF] from='client.? 192.168.123.106:0/4058802871' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.4"}]: dispatch 2026-03-08T23:09:06.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:05 vm11 bash[23232]: audit 2026-03-08T23:09:05.607410+0000 mon.c (mon.2) 130 : audit [INF] from='client.? 192.168.123.106:0/4058802871' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.4"}]: dispatch 2026-03-08T23:09:06.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:05 vm11 bash[23232]: audit 2026-03-08T23:09:05.607410+0000 mon.c (mon.2) 130 : audit [INF] from='client.? 
192.168.123.106:0/4058802871' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.4"}]: dispatch 2026-03-08T23:09:07.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:06 vm06 bash[20625]: cluster 2026-03-08T23:09:05.684180+0000 mgr.y (mgr.24419) 252 : cluster [DBG] pgmap v168: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:09:07.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:06 vm06 bash[20625]: cluster 2026-03-08T23:09:05.684180+0000 mgr.y (mgr.24419) 252 : cluster [DBG] pgmap v168: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:09:07.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:06 vm06 bash[27746]: cluster 2026-03-08T23:09:05.684180+0000 mgr.y (mgr.24419) 252 : cluster [DBG] pgmap v168: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:09:07.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:06 vm06 bash[27746]: cluster 2026-03-08T23:09:05.684180+0000 mgr.y (mgr.24419) 252 : cluster [DBG] pgmap v168: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:09:07.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:06 vm11 bash[23232]: cluster 2026-03-08T23:09:05.684180+0000 mgr.y (mgr.24419) 252 : cluster [DBG] pgmap v168: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:09:07.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:06 vm11 bash[23232]: cluster 2026-03-08T23:09:05.684180+0000 mgr.y (mgr.24419) 252 : cluster [DBG] pgmap v168: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:09:08.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:07 vm06 bash[20625]: audit 2026-03-08T23:09:07.873654+0000 mon.c (mon.2) 131 : 
audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:09:08.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:07 vm06 bash[20625]: audit 2026-03-08T23:09:07.873654+0000 mon.c (mon.2) 131 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:09:08.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:07 vm06 bash[27746]: audit 2026-03-08T23:09:07.873654+0000 mon.c (mon.2) 131 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:09:08.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:07 vm06 bash[27746]: audit 2026-03-08T23:09:07.873654+0000 mon.c (mon.2) 131 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:09:08.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:07 vm11 bash[23232]: audit 2026-03-08T23:09:07.873654+0000 mon.c (mon.2) 131 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:09:08.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:07 vm11 bash[23232]: audit 2026-03-08T23:09:07.873654+0000 mon.c (mon.2) 131 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:09:09.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:09 vm06 bash[20625]: cluster 2026-03-08T23:09:07.684430+0000 mgr.y (mgr.24419) 253 : cluster [DBG] pgmap v169: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:09:09.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:09 vm06 bash[20625]: cluster 
2026-03-08T23:09:07.684430+0000 mgr.y (mgr.24419) 253 : cluster [DBG] pgmap v169: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:09:09.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:09 vm06 bash[27746]: cluster 2026-03-08T23:09:07.684430+0000 mgr.y (mgr.24419) 253 : cluster [DBG] pgmap v169: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:09:09.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:09 vm06 bash[27746]: cluster 2026-03-08T23:09:07.684430+0000 mgr.y (mgr.24419) 253 : cluster [DBG] pgmap v169: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:09:09.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:09 vm11 bash[23232]: cluster 2026-03-08T23:09:07.684430+0000 mgr.y (mgr.24419) 253 : cluster [DBG] pgmap v169: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:09:09.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:09 vm11 bash[23232]: cluster 2026-03-08T23:09:07.684430+0000 mgr.y (mgr.24419) 253 : cluster [DBG] pgmap v169: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:09:10.617 INFO:teuthology.orchestra.run.vm06.stderr:++ ceph auth get-key osd.4 2026-03-08T23:09:10.802 INFO:teuthology.orchestra.run.vm06.stderr:+ NK=AQB3Aa5pqzPyJhAAel/4r9Ju9usysUxEdypaDQ== 2026-03-08T23:09:10.802 INFO:teuthology.orchestra.run.vm06.stderr:+ '[' AQCV/61pHgqLLhAASzuP93yFB1njvWTYZFqtDw== == AQB3Aa5pqzPyJhAAel/4r9Ju9usysUxEdypaDQ== ']' 2026-03-08T23:09:10.802 INFO:teuthology.orchestra.run.vm06.stderr:+ for f in osd.0 osd.1 osd.2 osd.3 osd.4 osd.5 osd.6 osd.7 mgr.y mgr.x 2026-03-08T23:09:10.802 INFO:teuthology.orchestra.run.vm06.stderr:+ echo 'rotating key for osd.5' 2026-03-08T23:09:10.802 
INFO:teuthology.orchestra.run.vm06.stdout:rotating key for osd.5 2026-03-08T23:09:10.802 INFO:teuthology.orchestra.run.vm06.stderr:++ ceph auth get-key osd.5 2026-03-08T23:09:10.982 INFO:teuthology.orchestra.run.vm06.stderr:+ K=AQC5/61p4vkvGxAAb+Yf0/QpeuyBuji9Z2DDyg== 2026-03-08T23:09:10.982 INFO:teuthology.orchestra.run.vm06.stderr:+ NK=AQC5/61p4vkvGxAAb+Yf0/QpeuyBuji9Z2DDyg== 2026-03-08T23:09:10.982 INFO:teuthology.orchestra.run.vm06.stderr:+ ceph orch daemon rotate-key osd.5 2026-03-08T23:09:11.029 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:09:10 vm06 bash[20883]: ::ffff:192.168.123.111 - - [08/Mar/2026:23:09:10] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-08T23:09:11.152 INFO:teuthology.orchestra.run.vm06.stdout:Scheduled to rotate-key osd.5 on host 'vm11' 2026-03-08T23:09:11.179 INFO:teuthology.orchestra.run.vm06.stderr:+ '[' AQC5/61p4vkvGxAAb+Yf0/QpeuyBuji9Z2DDyg== == AQC5/61p4vkvGxAAb+Yf0/QpeuyBuji9Z2DDyg== ']' 2026-03-08T23:09:11.179 INFO:teuthology.orchestra.run.vm06.stderr:+ sleep 5 2026-03-08T23:09:11.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:11 vm06 bash[20625]: cluster 2026-03-08T23:09:09.684695+0000 mgr.y (mgr.24419) 254 : cluster [DBG] pgmap v170: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:09:11.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:11 vm06 bash[20625]: cluster 2026-03-08T23:09:09.684695+0000 mgr.y (mgr.24419) 254 : cluster [DBG] pgmap v170: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:09:11.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:11 vm06 bash[20625]: audit 2026-03-08T23:09:10.794088+0000 mon.c (mon.2) 132 : audit [INF] from='client.? 
192.168.123.106:0/254465003' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.4"}]: dispatch 2026-03-08T23:09:11.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:11 vm06 bash[20625]: audit 2026-03-08T23:09:10.794088+0000 mon.c (mon.2) 132 : audit [INF] from='client.? 192.168.123.106:0/254465003' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.4"}]: dispatch 2026-03-08T23:09:11.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:11 vm06 bash[20625]: audit 2026-03-08T23:09:10.974797+0000 mon.a (mon.0) 880 : audit [INF] from='client.? 192.168.123.106:0/3641450117' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.5"}]: dispatch 2026-03-08T23:09:11.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:11 vm06 bash[20625]: audit 2026-03-08T23:09:10.974797+0000 mon.a (mon.0) 880 : audit [INF] from='client.? 192.168.123.106:0/3641450117' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.5"}]: dispatch 2026-03-08T23:09:11.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:11 vm06 bash[20625]: audit 2026-03-08T23:09:11.141763+0000 mon.a (mon.0) 881 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:09:11.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:11 vm06 bash[20625]: audit 2026-03-08T23:09:11.141763+0000 mon.a (mon.0) 881 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:09:11.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:11 vm06 bash[20625]: audit 2026-03-08T23:09:11.148821+0000 mon.a (mon.0) 882 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:09:11.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:11 vm06 bash[20625]: audit 2026-03-08T23:09:11.148821+0000 mon.a (mon.0) 882 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:09:11.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:11 vm06 bash[20625]: audit 2026-03-08T23:09:11.151969+0000 mon.c (mon.2) 133 : audit [DBG] from='mgr.24419 
192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:09:11.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:11 vm06 bash[20625]: audit 2026-03-08T23:09:11.151969+0000 mon.c (mon.2) 133 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:09:11.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:11 vm06 bash[27746]: cluster 2026-03-08T23:09:09.684695+0000 mgr.y (mgr.24419) 254 : cluster [DBG] pgmap v170: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:09:11.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:11 vm06 bash[27746]: cluster 2026-03-08T23:09:09.684695+0000 mgr.y (mgr.24419) 254 : cluster [DBG] pgmap v170: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:09:11.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:11 vm06 bash[27746]: audit 2026-03-08T23:09:10.794088+0000 mon.c (mon.2) 132 : audit [INF] from='client.? 192.168.123.106:0/254465003' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.4"}]: dispatch 2026-03-08T23:09:11.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:11 vm06 bash[27746]: audit 2026-03-08T23:09:10.794088+0000 mon.c (mon.2) 132 : audit [INF] from='client.? 192.168.123.106:0/254465003' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.4"}]: dispatch 2026-03-08T23:09:11.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:11 vm06 bash[27746]: audit 2026-03-08T23:09:10.974797+0000 mon.a (mon.0) 880 : audit [INF] from='client.? 
192.168.123.106:0/3641450117' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.5"}]: dispatch 2026-03-08T23:09:11.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:11 vm06 bash[27746]: audit 2026-03-08T23:09:10.974797+0000 mon.a (mon.0) 880 : audit [INF] from='client.? 192.168.123.106:0/3641450117' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.5"}]: dispatch 2026-03-08T23:09:11.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:11 vm06 bash[27746]: audit 2026-03-08T23:09:11.141763+0000 mon.a (mon.0) 881 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:09:11.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:11 vm06 bash[27746]: audit 2026-03-08T23:09:11.141763+0000 mon.a (mon.0) 881 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:09:11.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:11 vm06 bash[27746]: audit 2026-03-08T23:09:11.148821+0000 mon.a (mon.0) 882 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:09:11.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:11 vm06 bash[27746]: audit 2026-03-08T23:09:11.148821+0000 mon.a (mon.0) 882 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:09:11.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:11 vm06 bash[27746]: audit 2026-03-08T23:09:11.151969+0000 mon.c (mon.2) 133 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:09:11.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:11 vm06 bash[27746]: audit 2026-03-08T23:09:11.151969+0000 mon.c (mon.2) 133 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:09:11.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:11 vm11 bash[23232]: cluster 2026-03-08T23:09:09.684695+0000 mgr.y (mgr.24419) 254 : cluster [DBG] pgmap v170: 132 pgs: 132 active+clean; 
455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:09:11.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:11 vm11 bash[23232]: cluster 2026-03-08T23:09:09.684695+0000 mgr.y (mgr.24419) 254 : cluster [DBG] pgmap v170: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:09:11.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:11 vm11 bash[23232]: audit 2026-03-08T23:09:10.794088+0000 mon.c (mon.2) 132 : audit [INF] from='client.? 192.168.123.106:0/254465003' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.4"}]: dispatch 2026-03-08T23:09:11.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:11 vm11 bash[23232]: audit 2026-03-08T23:09:10.794088+0000 mon.c (mon.2) 132 : audit [INF] from='client.? 192.168.123.106:0/254465003' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.4"}]: dispatch 2026-03-08T23:09:11.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:11 vm11 bash[23232]: audit 2026-03-08T23:09:10.974797+0000 mon.a (mon.0) 880 : audit [INF] from='client.? 192.168.123.106:0/3641450117' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.5"}]: dispatch 2026-03-08T23:09:11.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:11 vm11 bash[23232]: audit 2026-03-08T23:09:10.974797+0000 mon.a (mon.0) 880 : audit [INF] from='client.? 
192.168.123.106:0/3641450117' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.5"}]: dispatch 2026-03-08T23:09:11.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:11 vm11 bash[23232]: audit 2026-03-08T23:09:11.141763+0000 mon.a (mon.0) 881 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:09:11.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:11 vm11 bash[23232]: audit 2026-03-08T23:09:11.141763+0000 mon.a (mon.0) 881 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:09:11.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:11 vm11 bash[23232]: audit 2026-03-08T23:09:11.148821+0000 mon.a (mon.0) 882 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:09:11.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:11 vm11 bash[23232]: audit 2026-03-08T23:09:11.148821+0000 mon.a (mon.0) 882 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:09:11.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:11 vm11 bash[23232]: audit 2026-03-08T23:09:11.151969+0000 mon.c (mon.2) 133 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:09:11.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:11 vm11 bash[23232]: audit 2026-03-08T23:09:11.151969+0000 mon.c (mon.2) 133 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:09:12.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:12 vm06 bash[20625]: audit 2026-03-08T23:09:11.129278+0000 mgr.y (mgr.24419) 255 : audit [DBG] from='client.24793 -' entity='client.admin' cmd=[{"prefix": "orch daemon", "action": "rotate-key", "name": "osd.5", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:09:12.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:12 vm06 bash[20625]: audit 2026-03-08T23:09:11.129278+0000 mgr.y (mgr.24419) 255 : audit [DBG] 
from='client.24793 -' entity='client.admin' cmd=[{"prefix": "orch daemon", "action": "rotate-key", "name": "osd.5", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:09:12.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:12 vm06 bash[20625]: cephadm 2026-03-08T23:09:11.129702+0000 mgr.y (mgr.24419) 256 : cephadm [INF] Schedule rotate-key daemon osd.5 2026-03-08T23:09:12.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:12 vm06 bash[20625]: cephadm 2026-03-08T23:09:11.129702+0000 mgr.y (mgr.24419) 256 : cephadm [INF] Schedule rotate-key daemon osd.5 2026-03-08T23:09:12.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:12 vm06 bash[20625]: audit 2026-03-08T23:09:11.472042+0000 mon.c (mon.2) 134 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:09:12.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:12 vm06 bash[20625]: audit 2026-03-08T23:09:11.472042+0000 mon.c (mon.2) 134 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:09:12.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:12 vm06 bash[20625]: audit 2026-03-08T23:09:11.473357+0000 mon.c (mon.2) 135 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:09:12.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:12 vm06 bash[20625]: audit 2026-03-08T23:09:11.473357+0000 mon.c (mon.2) 135 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:09:12.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:12 vm06 bash[20625]: audit 2026-03-08T23:09:11.481968+0000 mon.a (mon.0) 883 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:09:12.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 
23:09:12 vm06 bash[20625]: audit 2026-03-08T23:09:11.481968+0000 mon.a (mon.0) 883 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:09:12.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:12 vm06 bash[20625]: audit 2026-03-08T23:09:11.500026+0000 mon.c (mon.2) 136 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get-or-create-pending", "entity": "osd.5", "format": "json"}]: dispatch
2026-03-08T23:09:12.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:12 vm06 bash[20625]: audit 2026-03-08T23:09:11.500373+0000 mon.a (mon.0) 884 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create-pending", "entity": "osd.5", "format": "json"}]: dispatch
2026-03-08T23:09:12.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:12 vm06 bash[20625]: audit 2026-03-08T23:09:11.504762+0000 mon.a (mon.0) 885 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd='[{"prefix": "auth get-or-create-pending", "entity": "osd.5", "format": "json"}]': finished
2026-03-08T23:09:12.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:12 vm06 bash[20625]: audit 2026-03-08T23:09:11.931171+0000 mon.a (mon.0) 886 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:09:12.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:12 vm06 bash[20625]: audit 2026-03-08T23:09:11.939670+0000 mon.a (mon.0) 887 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:09:12.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:12 vm06 bash[20625]: audit 2026-03-08T23:09:12.107773+0000 mon.a (mon.0) 888 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:09:12.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:12 vm06 bash[20625]: audit 2026-03-08T23:09:12.115487+0000 mon.a (mon.0) 889 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:09:12.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:12 vm06 bash[27746]: audit 2026-03-08T23:09:11.129278+0000 mgr.y (mgr.24419) 255 : audit [DBG] from='client.24793 -' entity='client.admin' cmd=[{"prefix": "orch daemon", "action": "rotate-key", "name": "osd.5", "target": ["mon-mgr", ""]}]: dispatch
2026-03-08T23:09:12.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:12 vm06 bash[27746]: cephadm 2026-03-08T23:09:11.129702+0000 mgr.y (mgr.24419) 256 : cephadm [INF] Schedule rotate-key daemon osd.5
2026-03-08T23:09:12.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:12 vm06 bash[27746]: audit 2026-03-08T23:09:11.472042+0000 mon.c (mon.2) 134 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:09:12.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:12 vm06 bash[27746]: audit 2026-03-08T23:09:11.473357+0000 mon.c (mon.2) 135 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T23:09:12.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:12 vm06 bash[27746]: audit 2026-03-08T23:09:11.481968+0000 mon.a (mon.0) 883 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:09:12.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:12 vm06 bash[27746]: audit 2026-03-08T23:09:11.500026+0000 mon.c (mon.2) 136 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get-or-create-pending", "entity": "osd.5", "format": "json"}]: dispatch
2026-03-08T23:09:12.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:12 vm06 bash[27746]: audit 2026-03-08T23:09:11.500373+0000 mon.a (mon.0) 884 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create-pending", "entity": "osd.5", "format": "json"}]: dispatch
2026-03-08T23:09:12.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:12 vm06 bash[27746]: audit 2026-03-08T23:09:11.504762+0000 mon.a (mon.0) 885 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd='[{"prefix": "auth get-or-create-pending", "entity": "osd.5", "format": "json"}]': finished
2026-03-08T23:09:12.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:12 vm06 bash[27746]: audit 2026-03-08T23:09:11.931171+0000 mon.a (mon.0) 886 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:09:12.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:12 vm06 bash[27746]: audit 2026-03-08T23:09:11.939670+0000 mon.a (mon.0) 887 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:09:12.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:12 vm06 bash[27746]: audit 2026-03-08T23:09:12.107773+0000 mon.a (mon.0) 888 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:09:12.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:12 vm06 bash[27746]: audit 2026-03-08T23:09:12.115487+0000 mon.a (mon.0) 889 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:09:12.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:12 vm11 bash[23232]: audit 2026-03-08T23:09:11.129278+0000 mgr.y (mgr.24419) 255 : audit [DBG] from='client.24793 -' entity='client.admin' cmd=[{"prefix": "orch daemon", "action": "rotate-key", "name": "osd.5", "target": ["mon-mgr", ""]}]: dispatch
2026-03-08T23:09:12.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:12 vm11 bash[23232]: cephadm 2026-03-08T23:09:11.129702+0000 mgr.y (mgr.24419) 256 : cephadm [INF] Schedule rotate-key daemon osd.5
2026-03-08T23:09:12.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:12 vm11 bash[23232]: audit 2026-03-08T23:09:11.472042+0000 mon.c (mon.2) 134 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:09:12.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:12 vm11 bash[23232]: audit 2026-03-08T23:09:11.473357+0000 mon.c (mon.2) 135 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T23:09:12.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:12 vm11 bash[23232]: audit 2026-03-08T23:09:11.481968+0000 mon.a (mon.0) 883 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:09:12.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:12 vm11 bash[23232]: audit 2026-03-08T23:09:11.500026+0000 mon.c (mon.2) 136 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get-or-create-pending", "entity": "osd.5", "format": "json"}]: dispatch
2026-03-08T23:09:12.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:12 vm11 bash[23232]: audit 2026-03-08T23:09:11.500373+0000 mon.a (mon.0) 884 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create-pending", "entity": "osd.5", "format": "json"}]: dispatch
2026-03-08T23:09:12.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:12 vm11 bash[23232]: audit 2026-03-08T23:09:11.504762+0000 mon.a (mon.0) 885 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd='[{"prefix": "auth get-or-create-pending", "entity": "osd.5", "format": "json"}]': finished
2026-03-08T23:09:12.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:12 vm11 bash[23232]: audit 2026-03-08T23:09:11.931171+0000 mon.a (mon.0) 886 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:09:12.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:12 vm11 bash[23232]: audit 2026-03-08T23:09:11.939670+0000 mon.a (mon.0) 887 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:09:12.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:12 vm11 bash[23232]: audit 2026-03-08T23:09:12.107773+0000 mon.a (mon.0) 888 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:09:12.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:12 vm11 bash[23232]: audit 2026-03-08T23:09:12.115487+0000 mon.a (mon.0) 889 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:09:12.559 INFO:journalctl@ceph.iscsi.iscsi.a.vm11.stdout:Mar 08 23:09:12 vm11 bash[48986]: debug there is no tcmu-runner data available
2026-03-08T23:09:13.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:13 vm06 bash[20625]: cephadm 2026-03-08T23:09:11.499646+0000 mgr.y (mgr.24419) 257 : cephadm [INF] Rotating authentication key for osd.5
2026-03-08T23:09:13.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:13 vm06 bash[20625]: cephadm 2026-03-08T23:09:11.513606+0000 mgr.y (mgr.24419) 258 : cephadm [INF] Reconfiguring daemon osd.5 on vm11
2026-03-08T23:09:13.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:13 vm06 bash[20625]: cluster 2026-03-08T23:09:11.685153+0000 mgr.y (mgr.24419) 259 : cluster [DBG] pgmap v171: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-08T23:09:13.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:13 vm06 bash[27746]: cephadm 2026-03-08T23:09:11.499646+0000 mgr.y (mgr.24419) 257 : cephadm [INF] Rotating authentication key for osd.5
2026-03-08T23:09:13.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:13 vm06 bash[27746]: cephadm 2026-03-08T23:09:11.513606+0000 mgr.y (mgr.24419) 258 : cephadm [INF] Reconfiguring daemon osd.5 on vm11
2026-03-08T23:09:13.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:13 vm06 bash[27746]: cluster 2026-03-08T23:09:11.685153+0000 mgr.y (mgr.24419) 259 : cluster [DBG] pgmap v171: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-08T23:09:13.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:13 vm11 bash[23232]: cephadm 2026-03-08T23:09:11.499646+0000 mgr.y (mgr.24419) 257 : cephadm [INF] Rotating authentication key for osd.5
2026-03-08T23:09:13.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:13 vm11 bash[23232]: cephadm 2026-03-08T23:09:11.513606+0000 mgr.y (mgr.24419) 258 : cephadm [INF] Reconfiguring daemon osd.5 on vm11
2026-03-08T23:09:13.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:13 vm11 bash[23232]: cluster 2026-03-08T23:09:11.685153+0000 mgr.y (mgr.24419) 259 : cluster [DBG] pgmap v171: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-08T23:09:14.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:14 vm06 bash[20625]: audit 2026-03-08T23:09:12.328587+0000 mgr.y (mgr.24419) 260 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:09:14.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:14 vm06 bash[27746]: audit 2026-03-08T23:09:12.328587+0000 mgr.y (mgr.24419) 260 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:09:14.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:14 vm11 bash[23232]: audit 2026-03-08T23:09:12.328587+0000 mgr.y (mgr.24419) 260 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:09:15.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:15 vm06 bash[20625]: cluster 2026-03-08T23:09:13.685425+0000 mgr.y (mgr.24419) 261 : cluster [DBG] pgmap v172: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-08T23:09:15.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:15 vm06 bash[27746]: cluster 2026-03-08T23:09:13.685425+0000 mgr.y (mgr.24419) 261 : cluster [DBG] pgmap v172: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-08T23:09:15.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:15 vm11 bash[23232]: cluster 2026-03-08T23:09:13.685425+0000 mgr.y (mgr.24419) 261 : cluster [DBG] pgmap v172: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-08T23:09:16.177 INFO:teuthology.orchestra.run.vm06.stderr:++ ceph auth get-key osd.5
2026-03-08T23:09:16.375 INFO:teuthology.orchestra.run.vm06.stderr:+ NK=AQC5/61p4vkvGxAAb+Yf0/QpeuyBuji9Z2DDyg==
2026-03-08T23:09:16.375 INFO:teuthology.orchestra.run.vm06.stderr:+ '[' AQC5/61p4vkvGxAAb+Yf0/QpeuyBuji9Z2DDyg== == AQC5/61p4vkvGxAAb+Yf0/QpeuyBuji9Z2DDyg== ']'
2026-03-08T23:09:16.375 INFO:teuthology.orchestra.run.vm06.stderr:+ sleep 5
2026-03-08T23:09:16.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:16 vm06 bash[20625]: cluster 2026-03-08T23:09:15.685899+0000 mgr.y (mgr.24419) 262 : cluster [DBG] pgmap v173: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-08T23:09:16.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:16 vm06 bash[27746]: cluster 2026-03-08T23:09:15.685899+0000 mgr.y (mgr.24419) 262 : cluster [DBG] pgmap v173: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-08T23:09:16.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:16 vm11 bash[23232]: cluster 2026-03-08T23:09:15.685899+0000 mgr.y (mgr.24419) 262 : cluster [DBG] pgmap v173: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-08T23:09:17.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:17 vm06 bash[20625]: audit 2026-03-08T23:09:16.366170+0000 mon.c (mon.2) 137 : audit [INF] from='client.? 192.168.123.106:0/3707048545' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.5"}]: dispatch
2026-03-08T23:09:17.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:17 vm06 bash[27746]: audit 2026-03-08T23:09:16.366170+0000 mon.c (mon.2) 137 : audit [INF] from='client.? 192.168.123.106:0/3707048545' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.5"}]: dispatch
2026-03-08T23:09:17.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:17 vm11 bash[23232]: audit 2026-03-08T23:09:16.366170+0000 mon.c (mon.2) 137 : audit [INF] from='client.? 192.168.123.106:0/3707048545' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.5"}]: dispatch
2026-03-08T23:09:18.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:18 vm06 bash[20625]: cluster 2026-03-08T23:09:17.686130+0000 mgr.y (mgr.24419) 263 : cluster [DBG] pgmap v174: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-08T23:09:18.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:18 vm06 bash[27746]: cluster 2026-03-08T23:09:17.686130+0000 mgr.y (mgr.24419) 263 : cluster [DBG] pgmap v174: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-08T23:09:18.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:18 vm11 bash[23232]: cluster 2026-03-08T23:09:17.686130+0000 mgr.y (mgr.24419) 263 : cluster [DBG] pgmap v174: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-08T23:09:21.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:20 vm06 bash[20625]: cluster 2026-03-08T23:09:19.686410+0000 mgr.y (mgr.24419) 264 : cluster [DBG] pgmap v175: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-08T23:09:21.029 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:09:20 vm06 bash[20883]: ::ffff:192.168.123.111 - - [08/Mar/2026:23:09:20] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-08T23:09:21.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:20 vm06 bash[27746]: cluster 2026-03-08T23:09:19.686410+0000 mgr.y (mgr.24419) 264 : cluster [DBG] pgmap v175: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-08T23:09:21.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:20 vm11 bash[23232]: cluster 2026-03-08T23:09:19.686410+0000 mgr.y (mgr.24419) 264 : cluster [DBG] pgmap v175: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-08T23:09:21.376 INFO:teuthology.orchestra.run.vm06.stderr:++ ceph auth get-key osd.5
2026-03-08T23:09:21.556 INFO:teuthology.orchestra.run.vm06.stderr:+ NK=AQC5/61p4vkvGxAAb+Yf0/QpeuyBuji9Z2DDyg==
2026-03-08T23:09:21.557 INFO:teuthology.orchestra.run.vm06.stderr:+ '[' AQC5/61p4vkvGxAAb+Yf0/QpeuyBuji9Z2DDyg== == AQC5/61p4vkvGxAAb+Yf0/QpeuyBuji9Z2DDyg== ']'
2026-03-08T23:09:21.557 INFO:teuthology.orchestra.run.vm06.stderr:+ sleep 5
2026-03-08T23:09:22.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:21 vm06 bash[20625]: audit 2026-03-08T23:09:21.546507+0000 mon.b (mon.1) 46 : audit [INF] from='client.? 192.168.123.106:0/314897057' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.5"}]: dispatch
2026-03-08T23:09:22.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:21 vm06 bash[27746]: audit 2026-03-08T23:09:21.546507+0000 mon.b (mon.1) 46 : audit [INF] from='client.? 192.168.123.106:0/314897057' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.5"}]: dispatch
2026-03-08T23:09:22.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:21 vm11 bash[23232]: audit 2026-03-08T23:09:21.546507+0000 mon.b (mon.1) 46 : audit [INF] from='client.? 192.168.123.106:0/314897057' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.5"}]: dispatch
2026-03-08T23:09:22.775 INFO:journalctl@ceph.iscsi.iscsi.a.vm11.stdout:Mar 08 23:09:22 vm11 bash[48986]: debug there is no tcmu-runner data available
2026-03-08T23:09:23.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:22 vm06 bash[20625]: cluster 2026-03-08T23:09:21.686936+0000 mgr.y (mgr.24419) 265 : cluster [DBG] pgmap v176: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-08T23:09:23.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:22 vm06 bash[27746]: cluster 2026-03-08T23:09:21.686936+0000 mgr.y (mgr.24419) 265 : cluster [DBG] pgmap v176: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-08T23:09:23.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:22 vm11 bash[23232]: cluster 2026-03-08T23:09:21.686936+0000 mgr.y (mgr.24419) 265 : cluster [DBG] pgmap v176: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s
2026-03-08T23:09:24.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:23 vm11 bash[23232]: audit 2026-03-08T23:09:22.336998+0000 mgr.y (mgr.24419) 266 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:09:24.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:23 vm11 bash[23232]: audit 2026-03-08T23:09:22.879234+0000 mon.c (mon.2) 138 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-08T23:09:24.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:23 vm06 bash[20625]: audit 2026-03-08T23:09:22.336998+0000 mgr.y (mgr.24419) 266 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:09:24.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:23 vm06 bash[20625]: audit 2026-03-08T23:09:22.879234+0000 mon.c (mon.2) 138 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-08T23:09:24.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:23 vm06 bash[27746]: audit 2026-03-08T23:09:22.336998+0000 mgr.y (mgr.24419) 266 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch
2026-03-08T23:09:24.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:23 vm06 bash[27746]: audit 2026-03-08T23:09:22.879234+0000 mon.c (mon.2) 138 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-08T23:09:25.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:24 vm11 bash[23232]: cluster 2026-03-08T23:09:23.687282+0000 mgr.y (mgr.24419) 267 : cluster [DBG] pgmap v177: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-08T23:09:25.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:24 vm11 bash[23232]: cluster 2026-03-08T23:09:23.687282+0000 mgr.y (mgr.24419) 267 : cluster [DBG] pgmap v177: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB
used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:09:25.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:24 vm06 bash[20625]: cluster 2026-03-08T23:09:23.687282+0000 mgr.y (mgr.24419) 267 : cluster [DBG] pgmap v177: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:09:25.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:24 vm06 bash[20625]: cluster 2026-03-08T23:09:23.687282+0000 mgr.y (mgr.24419) 267 : cluster [DBG] pgmap v177: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:09:25.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:24 vm06 bash[27746]: cluster 2026-03-08T23:09:23.687282+0000 mgr.y (mgr.24419) 267 : cluster [DBG] pgmap v177: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:09:25.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:24 vm06 bash[27746]: cluster 2026-03-08T23:09:23.687282+0000 mgr.y (mgr.24419) 267 : cluster [DBG] pgmap v177: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:09:26.558 INFO:teuthology.orchestra.run.vm06.stderr:++ ceph auth get-key osd.5 2026-03-08T23:09:26.734 INFO:teuthology.orchestra.run.vm06.stderr:+ NK=AQC5/61p4vkvGxAAb+Yf0/QpeuyBuji9Z2DDyg== 2026-03-08T23:09:26.734 INFO:teuthology.orchestra.run.vm06.stderr:+ '[' AQC5/61p4vkvGxAAb+Yf0/QpeuyBuji9Z2DDyg== == AQC5/61p4vkvGxAAb+Yf0/QpeuyBuji9Z2DDyg== ']' 2026-03-08T23:09:26.734 INFO:teuthology.orchestra.run.vm06.stderr:+ sleep 5 2026-03-08T23:09:27.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:26 vm11 bash[23232]: cluster 2026-03-08T23:09:25.687791+0000 mgr.y (mgr.24419) 268 : cluster [DBG] pgmap v178: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:09:27.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 
23:09:26 vm11 bash[23232]: cluster 2026-03-08T23:09:25.687791+0000 mgr.y (mgr.24419) 268 : cluster [DBG] pgmap v178: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:09:27.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:26 vm11 bash[23232]: audit 2026-03-08T23:09:26.726260+0000 mon.c (mon.2) 139 : audit [INF] from='client.? 192.168.123.106:0/3979296041' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.5"}]: dispatch 2026-03-08T23:09:27.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:26 vm11 bash[23232]: audit 2026-03-08T23:09:26.726260+0000 mon.c (mon.2) 139 : audit [INF] from='client.? 192.168.123.106:0/3979296041' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.5"}]: dispatch 2026-03-08T23:09:27.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:26 vm06 bash[27746]: cluster 2026-03-08T23:09:25.687791+0000 mgr.y (mgr.24419) 268 : cluster [DBG] pgmap v178: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:09:27.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:26 vm06 bash[27746]: cluster 2026-03-08T23:09:25.687791+0000 mgr.y (mgr.24419) 268 : cluster [DBG] pgmap v178: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:09:27.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:26 vm06 bash[27746]: audit 2026-03-08T23:09:26.726260+0000 mon.c (mon.2) 139 : audit [INF] from='client.? 192.168.123.106:0/3979296041' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.5"}]: dispatch 2026-03-08T23:09:27.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:26 vm06 bash[27746]: audit 2026-03-08T23:09:26.726260+0000 mon.c (mon.2) 139 : audit [INF] from='client.? 
192.168.123.106:0/3979296041' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.5"}]: dispatch 2026-03-08T23:09:27.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:26 vm06 bash[20625]: cluster 2026-03-08T23:09:25.687791+0000 mgr.y (mgr.24419) 268 : cluster [DBG] pgmap v178: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:09:27.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:26 vm06 bash[20625]: cluster 2026-03-08T23:09:25.687791+0000 mgr.y (mgr.24419) 268 : cluster [DBG] pgmap v178: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:09:27.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:26 vm06 bash[20625]: audit 2026-03-08T23:09:26.726260+0000 mon.c (mon.2) 139 : audit [INF] from='client.? 192.168.123.106:0/3979296041' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.5"}]: dispatch 2026-03-08T23:09:27.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:26 vm06 bash[20625]: audit 2026-03-08T23:09:26.726260+0000 mon.c (mon.2) 139 : audit [INF] from='client.? 
192.168.123.106:0/3979296041' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.5"}]: dispatch 2026-03-08T23:09:29.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:28 vm11 bash[23232]: cluster 2026-03-08T23:09:27.688096+0000 mgr.y (mgr.24419) 269 : cluster [DBG] pgmap v179: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:09:29.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:28 vm11 bash[23232]: cluster 2026-03-08T23:09:27.688096+0000 mgr.y (mgr.24419) 269 : cluster [DBG] pgmap v179: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:09:29.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:28 vm06 bash[20625]: cluster 2026-03-08T23:09:27.688096+0000 mgr.y (mgr.24419) 269 : cluster [DBG] pgmap v179: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:09:29.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:28 vm06 bash[20625]: cluster 2026-03-08T23:09:27.688096+0000 mgr.y (mgr.24419) 269 : cluster [DBG] pgmap v179: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:09:29.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:28 vm06 bash[27746]: cluster 2026-03-08T23:09:27.688096+0000 mgr.y (mgr.24419) 269 : cluster [DBG] pgmap v179: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:09:29.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:28 vm06 bash[27746]: cluster 2026-03-08T23:09:27.688096+0000 mgr.y (mgr.24419) 269 : cluster [DBG] pgmap v179: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:09:31.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:30 vm06 bash[20625]: cluster 2026-03-08T23:09:29.688407+0000 mgr.y (mgr.24419) 270 : cluster 
[DBG] pgmap v180: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:09:31.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:30 vm06 bash[20625]: cluster 2026-03-08T23:09:29.688407+0000 mgr.y (mgr.24419) 270 : cluster [DBG] pgmap v180: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:09:31.029 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:09:30 vm06 bash[20883]: ::ffff:192.168.123.111 - - [08/Mar/2026:23:09:30] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-08T23:09:31.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:30 vm06 bash[27746]: cluster 2026-03-08T23:09:29.688407+0000 mgr.y (mgr.24419) 270 : cluster [DBG] pgmap v180: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:09:31.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:30 vm06 bash[27746]: cluster 2026-03-08T23:09:29.688407+0000 mgr.y (mgr.24419) 270 : cluster [DBG] pgmap v180: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:09:31.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:30 vm11 bash[23232]: cluster 2026-03-08T23:09:29.688407+0000 mgr.y (mgr.24419) 270 : cluster [DBG] pgmap v180: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:09:31.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:30 vm11 bash[23232]: cluster 2026-03-08T23:09:29.688407+0000 mgr.y (mgr.24419) 270 : cluster [DBG] pgmap v180: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:09:31.735 INFO:teuthology.orchestra.run.vm06.stderr:++ ceph auth get-key osd.5 2026-03-08T23:09:31.922 INFO:teuthology.orchestra.run.vm06.stderr:+ NK=AQC5/61p4vkvGxAAb+Yf0/QpeuyBuji9Z2DDyg== 2026-03-08T23:09:31.922 
INFO:teuthology.orchestra.run.vm06.stderr:+ '[' AQC5/61p4vkvGxAAb+Yf0/QpeuyBuji9Z2DDyg== == AQC5/61p4vkvGxAAb+Yf0/QpeuyBuji9Z2DDyg== ']' 2026-03-08T23:09:31.922 INFO:teuthology.orchestra.run.vm06.stderr:+ sleep 5 2026-03-08T23:09:32.808 INFO:journalctl@ceph.iscsi.iscsi.a.vm11.stdout:Mar 08 23:09:32 vm11 bash[48986]: debug there is no tcmu-runner data available 2026-03-08T23:09:33.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:32 vm06 bash[20625]: cluster 2026-03-08T23:09:31.688847+0000 mgr.y (mgr.24419) 271 : cluster [DBG] pgmap v181: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:09:33.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:32 vm06 bash[20625]: cluster 2026-03-08T23:09:31.688847+0000 mgr.y (mgr.24419) 271 : cluster [DBG] pgmap v181: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:09:33.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:32 vm06 bash[20625]: audit 2026-03-08T23:09:31.913835+0000 mon.a (mon.0) 890 : audit [INF] from='client.? 192.168.123.106:0/2271869287' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.5"}]: dispatch 2026-03-08T23:09:33.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:32 vm06 bash[20625]: audit 2026-03-08T23:09:31.913835+0000 mon.a (mon.0) 890 : audit [INF] from='client.? 
192.168.123.106:0/2271869287' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.5"}]: dispatch 2026-03-08T23:09:33.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:32 vm06 bash[27746]: cluster 2026-03-08T23:09:31.688847+0000 mgr.y (mgr.24419) 271 : cluster [DBG] pgmap v181: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:09:33.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:32 vm06 bash[27746]: cluster 2026-03-08T23:09:31.688847+0000 mgr.y (mgr.24419) 271 : cluster [DBG] pgmap v181: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:09:33.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:32 vm06 bash[27746]: audit 2026-03-08T23:09:31.913835+0000 mon.a (mon.0) 890 : audit [INF] from='client.? 192.168.123.106:0/2271869287' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.5"}]: dispatch 2026-03-08T23:09:33.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:32 vm06 bash[27746]: audit 2026-03-08T23:09:31.913835+0000 mon.a (mon.0) 890 : audit [INF] from='client.? 
192.168.123.106:0/2271869287' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.5"}]: dispatch 2026-03-08T23:09:33.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:32 vm11 bash[23232]: cluster 2026-03-08T23:09:31.688847+0000 mgr.y (mgr.24419) 271 : cluster [DBG] pgmap v181: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:09:33.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:32 vm11 bash[23232]: cluster 2026-03-08T23:09:31.688847+0000 mgr.y (mgr.24419) 271 : cluster [DBG] pgmap v181: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:09:33.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:32 vm11 bash[23232]: audit 2026-03-08T23:09:31.913835+0000 mon.a (mon.0) 890 : audit [INF] from='client.? 192.168.123.106:0/2271869287' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.5"}]: dispatch 2026-03-08T23:09:33.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:32 vm11 bash[23232]: audit 2026-03-08T23:09:31.913835+0000 mon.a (mon.0) 890 : audit [INF] from='client.? 
192.168.123.106:0/2271869287' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.5"}]: dispatch 2026-03-08T23:09:34.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:33 vm06 bash[20625]: audit 2026-03-08T23:09:32.339770+0000 mgr.y (mgr.24419) 272 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:09:34.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:33 vm06 bash[20625]: audit 2026-03-08T23:09:32.339770+0000 mgr.y (mgr.24419) 272 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:09:34.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:33 vm06 bash[27746]: audit 2026-03-08T23:09:32.339770+0000 mgr.y (mgr.24419) 272 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:09:34.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:33 vm06 bash[27746]: audit 2026-03-08T23:09:32.339770+0000 mgr.y (mgr.24419) 272 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:09:34.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:33 vm11 bash[23232]: audit 2026-03-08T23:09:32.339770+0000 mgr.y (mgr.24419) 272 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:09:34.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:33 vm11 bash[23232]: audit 2026-03-08T23:09:32.339770+0000 mgr.y (mgr.24419) 272 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:09:35.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:34 vm06 bash[20625]: cluster 2026-03-08T23:09:33.689110+0000 mgr.y (mgr.24419) 273 : cluster 
[DBG] pgmap v182: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:09:35.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:34 vm06 bash[20625]: cluster 2026-03-08T23:09:33.689110+0000 mgr.y (mgr.24419) 273 : cluster [DBG] pgmap v182: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:09:35.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:34 vm06 bash[27746]: cluster 2026-03-08T23:09:33.689110+0000 mgr.y (mgr.24419) 273 : cluster [DBG] pgmap v182: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:09:35.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:34 vm06 bash[27746]: cluster 2026-03-08T23:09:33.689110+0000 mgr.y (mgr.24419) 273 : cluster [DBG] pgmap v182: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:09:35.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:34 vm11 bash[23232]: cluster 2026-03-08T23:09:33.689110+0000 mgr.y (mgr.24419) 273 : cluster [DBG] pgmap v182: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:09:35.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:34 vm11 bash[23232]: cluster 2026-03-08T23:09:33.689110+0000 mgr.y (mgr.24419) 273 : cluster [DBG] pgmap v182: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:09:36.923 INFO:teuthology.orchestra.run.vm06.stderr:++ ceph auth get-key osd.5 2026-03-08T23:09:37.104 INFO:teuthology.orchestra.run.vm06.stderr:+ NK=AQC5/61p4vkvGxAAb+Yf0/QpeuyBuji9Z2DDyg== 2026-03-08T23:09:37.104 INFO:teuthology.orchestra.run.vm06.stderr:+ '[' AQC5/61p4vkvGxAAb+Yf0/QpeuyBuji9Z2DDyg== == AQC5/61p4vkvGxAAb+Yf0/QpeuyBuji9Z2DDyg== ']' 2026-03-08T23:09:37.104 INFO:teuthology.orchestra.run.vm06.stderr:+ sleep 5 
2026-03-08T23:09:37.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:36 vm06 bash[20625]: cluster 2026-03-08T23:09:35.689532+0000 mgr.y (mgr.24419) 274 : cluster [DBG] pgmap v183: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:09:37.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:36 vm06 bash[20625]: cluster 2026-03-08T23:09:35.689532+0000 mgr.y (mgr.24419) 274 : cluster [DBG] pgmap v183: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:09:37.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:36 vm06 bash[27746]: cluster 2026-03-08T23:09:35.689532+0000 mgr.y (mgr.24419) 274 : cluster [DBG] pgmap v183: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:09:37.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:36 vm06 bash[27746]: cluster 2026-03-08T23:09:35.689532+0000 mgr.y (mgr.24419) 274 : cluster [DBG] pgmap v183: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:09:37.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:36 vm11 bash[23232]: cluster 2026-03-08T23:09:35.689532+0000 mgr.y (mgr.24419) 274 : cluster [DBG] pgmap v183: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:09:37.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:36 vm11 bash[23232]: cluster 2026-03-08T23:09:35.689532+0000 mgr.y (mgr.24419) 274 : cluster [DBG] pgmap v183: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:09:38.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:37 vm06 bash[20625]: audit 2026-03-08T23:09:37.096568+0000 mon.c (mon.2) 140 : audit [INF] from='client.? 
192.168.123.106:0/88391355' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.5"}]: dispatch 2026-03-08T23:09:38.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:37 vm06 bash[20625]: audit 2026-03-08T23:09:37.096568+0000 mon.c (mon.2) 140 : audit [INF] from='client.? 192.168.123.106:0/88391355' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.5"}]: dispatch 2026-03-08T23:09:38.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:37 vm06 bash[27746]: audit 2026-03-08T23:09:37.096568+0000 mon.c (mon.2) 140 : audit [INF] from='client.? 192.168.123.106:0/88391355' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.5"}]: dispatch 2026-03-08T23:09:38.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:37 vm06 bash[27746]: audit 2026-03-08T23:09:37.096568+0000 mon.c (mon.2) 140 : audit [INF] from='client.? 192.168.123.106:0/88391355' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.5"}]: dispatch 2026-03-08T23:09:38.309 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:37 vm11 bash[23232]: audit 2026-03-08T23:09:37.096568+0000 mon.c (mon.2) 140 : audit [INF] from='client.? 192.168.123.106:0/88391355' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.5"}]: dispatch 2026-03-08T23:09:38.310 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:37 vm11 bash[23232]: audit 2026-03-08T23:09:37.096568+0000 mon.c (mon.2) 140 : audit [INF] from='client.? 
192.168.123.106:0/88391355' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.5"}]: dispatch 2026-03-08T23:09:39.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:38 vm06 bash[20625]: cluster 2026-03-08T23:09:37.689805+0000 mgr.y (mgr.24419) 275 : cluster [DBG] pgmap v184: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:09:39.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:38 vm06 bash[20625]: cluster 2026-03-08T23:09:37.689805+0000 mgr.y (mgr.24419) 275 : cluster [DBG] pgmap v184: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:09:39.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:38 vm06 bash[20625]: audit 2026-03-08T23:09:37.886259+0000 mon.c (mon.2) 141 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:09:39.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:38 vm06 bash[20625]: audit 2026-03-08T23:09:37.886259+0000 mon.c (mon.2) 141 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:09:39.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:38 vm06 bash[27746]: cluster 2026-03-08T23:09:37.689805+0000 mgr.y (mgr.24419) 275 : cluster [DBG] pgmap v184: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:09:39.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:38 vm06 bash[27746]: cluster 2026-03-08T23:09:37.689805+0000 mgr.y (mgr.24419) 275 : cluster [DBG] pgmap v184: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:09:39.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:38 vm06 bash[27746]: audit 2026-03-08T23:09:37.886259+0000 mon.c (mon.2) 141 : 
audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:09:39.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:38 vm06 bash[27746]: audit 2026-03-08T23:09:37.886259+0000 mon.c (mon.2) 141 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:09:39.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:38 vm11 bash[23232]: cluster 2026-03-08T23:09:37.689805+0000 mgr.y (mgr.24419) 275 : cluster [DBG] pgmap v184: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:09:39.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:38 vm11 bash[23232]: cluster 2026-03-08T23:09:37.689805+0000 mgr.y (mgr.24419) 275 : cluster [DBG] pgmap v184: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:09:39.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:38 vm11 bash[23232]: audit 2026-03-08T23:09:37.886259+0000 mon.c (mon.2) 141 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:09:39.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:38 vm11 bash[23232]: audit 2026-03-08T23:09:37.886259+0000 mon.c (mon.2) 141 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:09:41.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:40 vm06 bash[20625]: cluster 2026-03-08T23:09:39.690217+0000 mgr.y (mgr.24419) 276 : cluster [DBG] pgmap v185: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:09:41.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:40 vm06 bash[20625]: cluster 
2026-03-08T23:09:39.690217+0000 mgr.y (mgr.24419) 276 : cluster [DBG] pgmap v185: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:09:41.029 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:09:40 vm06 bash[20883]: ::ffff:192.168.123.111 - - [08/Mar/2026:23:09:40] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-08T23:09:41.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:40 vm06 bash[27746]: cluster 2026-03-08T23:09:39.690217+0000 mgr.y (mgr.24419) 276 : cluster [DBG] pgmap v185: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:09:41.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:40 vm06 bash[27746]: cluster 2026-03-08T23:09:39.690217+0000 mgr.y (mgr.24419) 276 : cluster [DBG] pgmap v185: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:09:41.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:40 vm11 bash[23232]: cluster 2026-03-08T23:09:39.690217+0000 mgr.y (mgr.24419) 276 : cluster [DBG] pgmap v185: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:09:41.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:40 vm11 bash[23232]: cluster 2026-03-08T23:09:39.690217+0000 mgr.y (mgr.24419) 276 : cluster [DBG] pgmap v185: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:09:42.106 INFO:teuthology.orchestra.run.vm06.stderr:++ ceph auth get-key osd.5 2026-03-08T23:09:42.283 INFO:teuthology.orchestra.run.vm06.stderr:+ NK=AQC5/61p4vkvGxAAb+Yf0/QpeuyBuji9Z2DDyg== 2026-03-08T23:09:42.283 INFO:teuthology.orchestra.run.vm06.stderr:+ '[' AQC5/61p4vkvGxAAb+Yf0/QpeuyBuji9Z2DDyg== == AQC5/61p4vkvGxAAb+Yf0/QpeuyBuji9Z2DDyg== ']' 2026-03-08T23:09:42.283 INFO:teuthology.orchestra.run.vm06.stderr:+ sleep 5 2026-03-08T23:09:42.808 
INFO:journalctl@ceph.iscsi.iscsi.a.vm11.stdout:Mar 08 23:09:42 vm11 bash[48986]: debug there is no tcmu-runner data available 2026-03-08T23:09:43.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:42 vm06 bash[20625]: cluster 2026-03-08T23:09:41.690698+0000 mgr.y (mgr.24419) 277 : cluster [DBG] pgmap v186: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:09:43.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:42 vm06 bash[20625]: cluster 2026-03-08T23:09:41.690698+0000 mgr.y (mgr.24419) 277 : cluster [DBG] pgmap v186: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:09:43.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:42 vm06 bash[20625]: audit 2026-03-08T23:09:42.275632+0000 mon.a (mon.0) 891 : audit [INF] from='client.? 192.168.123.106:0/1574621420' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.5"}]: dispatch 2026-03-08T23:09:43.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:42 vm06 bash[20625]: audit 2026-03-08T23:09:42.275632+0000 mon.a (mon.0) 891 : audit [INF] from='client.? 
192.168.123.106:0/1574621420' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.5"}]: dispatch 2026-03-08T23:09:43.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:42 vm06 bash[27746]: cluster 2026-03-08T23:09:41.690698+0000 mgr.y (mgr.24419) 277 : cluster [DBG] pgmap v186: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:09:43.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:42 vm06 bash[27746]: cluster 2026-03-08T23:09:41.690698+0000 mgr.y (mgr.24419) 277 : cluster [DBG] pgmap v186: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:09:43.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:42 vm06 bash[27746]: audit 2026-03-08T23:09:42.275632+0000 mon.a (mon.0) 891 : audit [INF] from='client.? 192.168.123.106:0/1574621420' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.5"}]: dispatch 2026-03-08T23:09:43.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:42 vm06 bash[27746]: audit 2026-03-08T23:09:42.275632+0000 mon.a (mon.0) 891 : audit [INF] from='client.? 
192.168.123.106:0/1574621420' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.5"}]: dispatch 2026-03-08T23:09:43.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:42 vm11 bash[23232]: cluster 2026-03-08T23:09:41.690698+0000 mgr.y (mgr.24419) 277 : cluster [DBG] pgmap v186: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:09:43.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:42 vm11 bash[23232]: cluster 2026-03-08T23:09:41.690698+0000 mgr.y (mgr.24419) 277 : cluster [DBG] pgmap v186: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:09:43.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:42 vm11 bash[23232]: audit 2026-03-08T23:09:42.275632+0000 mon.a (mon.0) 891 : audit [INF] from='client.? 192.168.123.106:0/1574621420' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.5"}]: dispatch 2026-03-08T23:09:43.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:42 vm11 bash[23232]: audit 2026-03-08T23:09:42.275632+0000 mon.a (mon.0) 891 : audit [INF] from='client.? 
192.168.123.106:0/1574621420' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.5"}]: dispatch 2026-03-08T23:09:44.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:43 vm06 bash[20625]: audit 2026-03-08T23:09:42.350458+0000 mgr.y (mgr.24419) 278 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:09:44.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:43 vm06 bash[20625]: audit 2026-03-08T23:09:42.350458+0000 mgr.y (mgr.24419) 278 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:09:44.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:43 vm06 bash[27746]: audit 2026-03-08T23:09:42.350458+0000 mgr.y (mgr.24419) 278 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:09:44.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:43 vm06 bash[27746]: audit 2026-03-08T23:09:42.350458+0000 mgr.y (mgr.24419) 278 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:09:44.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:43 vm11 bash[23232]: audit 2026-03-08T23:09:42.350458+0000 mgr.y (mgr.24419) 278 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:09:44.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:43 vm11 bash[23232]: audit 2026-03-08T23:09:42.350458+0000 mgr.y (mgr.24419) 278 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:09:45.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:44 vm06 bash[20625]: cluster 2026-03-08T23:09:43.690940+0000 mgr.y (mgr.24419) 279 : cluster 
[DBG] pgmap v187: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:09:45.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:44 vm06 bash[20625]: cluster 2026-03-08T23:09:43.690940+0000 mgr.y (mgr.24419) 279 : cluster [DBG] pgmap v187: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:09:45.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:44 vm06 bash[27746]: cluster 2026-03-08T23:09:43.690940+0000 mgr.y (mgr.24419) 279 : cluster [DBG] pgmap v187: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:09:45.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:44 vm06 bash[27746]: cluster 2026-03-08T23:09:43.690940+0000 mgr.y (mgr.24419) 279 : cluster [DBG] pgmap v187: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:09:45.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:44 vm11 bash[23232]: cluster 2026-03-08T23:09:43.690940+0000 mgr.y (mgr.24419) 279 : cluster [DBG] pgmap v187: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:09:45.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:44 vm11 bash[23232]: cluster 2026-03-08T23:09:43.690940+0000 mgr.y (mgr.24419) 279 : cluster [DBG] pgmap v187: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:09:47.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:46 vm06 bash[20625]: cluster 2026-03-08T23:09:45.691461+0000 mgr.y (mgr.24419) 280 : cluster [DBG] pgmap v188: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:09:47.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:46 vm06 bash[20625]: cluster 2026-03-08T23:09:45.691461+0000 mgr.y (mgr.24419) 280 : 
cluster [DBG] pgmap v188: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:09:47.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:46 vm06 bash[27746]: cluster 2026-03-08T23:09:45.691461+0000 mgr.y (mgr.24419) 280 : cluster [DBG] pgmap v188: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:09:47.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:46 vm06 bash[27746]: cluster 2026-03-08T23:09:45.691461+0000 mgr.y (mgr.24419) 280 : cluster [DBG] pgmap v188: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:09:47.285 INFO:teuthology.orchestra.run.vm06.stderr:++ ceph auth get-key osd.5 2026-03-08T23:09:47.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:46 vm11 bash[23232]: cluster 2026-03-08T23:09:45.691461+0000 mgr.y (mgr.24419) 280 : cluster [DBG] pgmap v188: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:09:47.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:46 vm11 bash[23232]: cluster 2026-03-08T23:09:45.691461+0000 mgr.y (mgr.24419) 280 : cluster [DBG] pgmap v188: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:09:47.468 INFO:teuthology.orchestra.run.vm06.stderr:+ NK=AQC5/61p4vkvGxAAb+Yf0/QpeuyBuji9Z2DDyg== 2026-03-08T23:09:47.468 INFO:teuthology.orchestra.run.vm06.stderr:+ '[' AQC5/61p4vkvGxAAb+Yf0/QpeuyBuji9Z2DDyg== == AQC5/61p4vkvGxAAb+Yf0/QpeuyBuji9Z2DDyg== ']' 2026-03-08T23:09:47.468 INFO:teuthology.orchestra.run.vm06.stderr:+ sleep 5 2026-03-08T23:09:48.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:47 vm06 bash[20625]: audit 2026-03-08T23:09:47.459995+0000 mon.a (mon.0) 892 : audit [INF] from='client.? 
192.168.123.106:0/3024747658' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.5"}]: dispatch 2026-03-08T23:09:48.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:47 vm06 bash[20625]: audit 2026-03-08T23:09:47.459995+0000 mon.a (mon.0) 892 : audit [INF] from='client.? 192.168.123.106:0/3024747658' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.5"}]: dispatch 2026-03-08T23:09:48.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:47 vm06 bash[27746]: audit 2026-03-08T23:09:47.459995+0000 mon.a (mon.0) 892 : audit [INF] from='client.? 192.168.123.106:0/3024747658' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.5"}]: dispatch 2026-03-08T23:09:48.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:47 vm06 bash[27746]: audit 2026-03-08T23:09:47.459995+0000 mon.a (mon.0) 892 : audit [INF] from='client.? 192.168.123.106:0/3024747658' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.5"}]: dispatch 2026-03-08T23:09:48.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:47 vm11 bash[23232]: audit 2026-03-08T23:09:47.459995+0000 mon.a (mon.0) 892 : audit [INF] from='client.? 192.168.123.106:0/3024747658' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.5"}]: dispatch 2026-03-08T23:09:48.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:47 vm11 bash[23232]: audit 2026-03-08T23:09:47.459995+0000 mon.a (mon.0) 892 : audit [INF] from='client.? 
192.168.123.106:0/3024747658' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.5"}]: dispatch 2026-03-08T23:09:49.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:49 vm06 bash[20625]: cluster 2026-03-08T23:09:47.691781+0000 mgr.y (mgr.24419) 281 : cluster [DBG] pgmap v189: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:09:49.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:49 vm06 bash[20625]: cluster 2026-03-08T23:09:47.691781+0000 mgr.y (mgr.24419) 281 : cluster [DBG] pgmap v189: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:09:49.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:49 vm06 bash[27746]: cluster 2026-03-08T23:09:47.691781+0000 mgr.y (mgr.24419) 281 : cluster [DBG] pgmap v189: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:09:49.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:49 vm06 bash[27746]: cluster 2026-03-08T23:09:47.691781+0000 mgr.y (mgr.24419) 281 : cluster [DBG] pgmap v189: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:09:49.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:49 vm11 bash[23232]: cluster 2026-03-08T23:09:47.691781+0000 mgr.y (mgr.24419) 281 : cluster [DBG] pgmap v189: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:09:49.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:49 vm11 bash[23232]: cluster 2026-03-08T23:09:47.691781+0000 mgr.y (mgr.24419) 281 : cluster [DBG] pgmap v189: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:09:51.029 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:09:50 vm06 bash[20883]: ::ffff:192.168.123.111 - - [08/Mar/2026:23:09:50] "GET /metrics 
HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-08T23:09:51.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:51 vm06 bash[20625]: cluster 2026-03-08T23:09:49.692082+0000 mgr.y (mgr.24419) 282 : cluster [DBG] pgmap v190: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:09:51.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:51 vm06 bash[20625]: cluster 2026-03-08T23:09:49.692082+0000 mgr.y (mgr.24419) 282 : cluster [DBG] pgmap v190: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:09:51.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:51 vm06 bash[27746]: cluster 2026-03-08T23:09:49.692082+0000 mgr.y (mgr.24419) 282 : cluster [DBG] pgmap v190: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:09:51.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:51 vm06 bash[27746]: cluster 2026-03-08T23:09:49.692082+0000 mgr.y (mgr.24419) 282 : cluster [DBG] pgmap v190: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:09:51.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:51 vm11 bash[23232]: cluster 2026-03-08T23:09:49.692082+0000 mgr.y (mgr.24419) 282 : cluster [DBG] pgmap v190: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:09:51.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:51 vm11 bash[23232]: cluster 2026-03-08T23:09:49.692082+0000 mgr.y (mgr.24419) 282 : cluster [DBG] pgmap v190: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:09:52.470 INFO:teuthology.orchestra.run.vm06.stderr:++ ceph auth get-key osd.5 2026-03-08T23:09:52.658 INFO:teuthology.orchestra.run.vm06.stderr:+ NK=AQCXAa5pgKbTHRAAEoBwd62Gv9dB7coGC4fg9Q== 2026-03-08T23:09:52.658 
INFO:teuthology.orchestra.run.vm06.stderr:+ '[' AQC5/61p4vkvGxAAb+Yf0/QpeuyBuji9Z2DDyg== == AQCXAa5pgKbTHRAAEoBwd62Gv9dB7coGC4fg9Q== ']' 2026-03-08T23:09:52.658 INFO:teuthology.orchestra.run.vm06.stderr:+ for f in osd.0 osd.1 osd.2 osd.3 osd.4 osd.5 osd.6 osd.7 mgr.y mgr.x 2026-03-08T23:09:52.658 INFO:teuthology.orchestra.run.vm06.stderr:+ echo 'rotating key for osd.6' 2026-03-08T23:09:52.658 INFO:teuthology.orchestra.run.vm06.stdout:rotating key for osd.6 2026-03-08T23:09:52.659 INFO:teuthology.orchestra.run.vm06.stderr:++ ceph auth get-key osd.6 2026-03-08T23:09:52.808 INFO:journalctl@ceph.iscsi.iscsi.a.vm11.stdout:Mar 08 23:09:52 vm11 bash[48986]: debug there is no tcmu-runner data available 2026-03-08T23:09:52.841 INFO:teuthology.orchestra.run.vm06.stderr:+ K=AQDc/61p0aC0JBAAvMoZLrqBpZLo7HRP13g7BQ== 2026-03-08T23:09:52.842 INFO:teuthology.orchestra.run.vm06.stderr:+ NK=AQDc/61p0aC0JBAAvMoZLrqBpZLo7HRP13g7BQ== 2026-03-08T23:09:52.842 INFO:teuthology.orchestra.run.vm06.stderr:+ ceph orch daemon rotate-key osd.6 2026-03-08T23:09:53.012 INFO:teuthology.orchestra.run.vm06.stdout:Scheduled to rotate-key osd.6 on host 'vm11' 2026-03-08T23:09:53.024 INFO:teuthology.orchestra.run.vm06.stderr:+ '[' AQDc/61p0aC0JBAAvMoZLrqBpZLo7HRP13g7BQ== == AQDc/61p0aC0JBAAvMoZLrqBpZLo7HRP13g7BQ== ']' 2026-03-08T23:09:53.024 INFO:teuthology.orchestra.run.vm06.stderr:+ sleep 5 2026-03-08T23:09:53.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:53 vm06 bash[20625]: cluster 2026-03-08T23:09:51.692516+0000 mgr.y (mgr.24419) 283 : cluster [DBG] pgmap v191: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:09:53.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:53 vm06 bash[20625]: cluster 2026-03-08T23:09:51.692516+0000 mgr.y (mgr.24419) 283 : cluster [DBG] pgmap v191: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:09:53.529 
INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:53 vm06 bash[20625]: audit 2026-03-08T23:09:52.649881+0000 mon.a (mon.0) 893 : audit [INF] from='client.? 192.168.123.106:0/831676926' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.5"}]: dispatch 2026-03-08T23:09:53.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:53 vm06 bash[20625]: audit 2026-03-08T23:09:52.649881+0000 mon.a (mon.0) 893 : audit [INF] from='client.? 192.168.123.106:0/831676926' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.5"}]: dispatch 2026-03-08T23:09:53.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:53 vm06 bash[20625]: audit 2026-03-08T23:09:52.833670+0000 mon.c (mon.2) 142 : audit [INF] from='client.? 192.168.123.106:0/1281867745' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.6"}]: dispatch 2026-03-08T23:09:53.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:53 vm06 bash[20625]: audit 2026-03-08T23:09:52.833670+0000 mon.c (mon.2) 142 : audit [INF] from='client.? 
192.168.123.106:0/1281867745' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.6"}]: dispatch 2026-03-08T23:09:53.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:53 vm06 bash[20625]: audit 2026-03-08T23:09:52.892738+0000 mon.c (mon.2) 143 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:09:53.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:53 vm06 bash[20625]: audit 2026-03-08T23:09:52.892738+0000 mon.c (mon.2) 143 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:09:53.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:53 vm06 bash[20625]: audit 2026-03-08T23:09:53.002014+0000 mon.a (mon.0) 894 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:09:53.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:53 vm06 bash[20625]: audit 2026-03-08T23:09:53.002014+0000 mon.a (mon.0) 894 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:09:53.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:53 vm06 bash[20625]: audit 2026-03-08T23:09:53.010406+0000 mon.a (mon.0) 895 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:09:53.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:53 vm06 bash[20625]: audit 2026-03-08T23:09:53.010406+0000 mon.a (mon.0) 895 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:09:53.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:53 vm06 bash[20625]: audit 2026-03-08T23:09:53.014010+0000 mon.c (mon.2) 144 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:09:53.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:53 vm06 bash[20625]: audit 2026-03-08T23:09:53.014010+0000 mon.c (mon.2) 144 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' 
entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:09:53.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:53 vm06 bash[20625]: audit 2026-03-08T23:09:53.015929+0000 mon.c (mon.2) 145 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:09:53.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:53 vm06 bash[20625]: audit 2026-03-08T23:09:53.015929+0000 mon.c (mon.2) 145 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:09:53.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:53 vm06 bash[20625]: audit 2026-03-08T23:09:53.017126+0000 mon.c (mon.2) 146 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:09:53.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:53 vm06 bash[20625]: audit 2026-03-08T23:09:53.017126+0000 mon.c (mon.2) 146 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:09:53.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:53 vm06 bash[20625]: audit 2026-03-08T23:09:53.022432+0000 mon.a (mon.0) 896 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:09:53.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:53 vm06 bash[20625]: audit 2026-03-08T23:09:53.022432+0000 mon.a (mon.0) 896 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:09:53.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:53 vm06 bash[20625]: audit 2026-03-08T23:09:53.036146+0000 mon.c (mon.2) 147 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get-or-create-pending", "entity": "osd.6", "format": "json"}]: dispatch 2026-03-08T23:09:53.529 
INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:53 vm06 bash[20625]: audit 2026-03-08T23:09:53.036146+0000 mon.c (mon.2) 147 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get-or-create-pending", "entity": "osd.6", "format": "json"}]: dispatch 2026-03-08T23:09:53.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:53 vm06 bash[20625]: audit 2026-03-08T23:09:53.036362+0000 mon.a (mon.0) 897 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create-pending", "entity": "osd.6", "format": "json"}]: dispatch 2026-03-08T23:09:53.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:53 vm06 bash[20625]: audit 2026-03-08T23:09:53.036362+0000 mon.a (mon.0) 897 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create-pending", "entity": "osd.6", "format": "json"}]: dispatch 2026-03-08T23:09:53.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:53 vm06 bash[20625]: audit 2026-03-08T23:09:53.038477+0000 mon.a (mon.0) 898 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd='[{"prefix": "auth get-or-create-pending", "entity": "osd.6", "format": "json"}]': finished 2026-03-08T23:09:53.530 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:53 vm06 bash[20625]: audit 2026-03-08T23:09:53.038477+0000 mon.a (mon.0) 898 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd='[{"prefix": "auth get-or-create-pending", "entity": "osd.6", "format": "json"}]': finished 2026-03-08T23:09:53.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:53 vm06 bash[27746]: cluster 2026-03-08T23:09:51.692516+0000 mgr.y (mgr.24419) 283 : cluster [DBG] pgmap v191: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:09:53.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:53 vm06 bash[27746]: cluster 2026-03-08T23:09:51.692516+0000 mgr.y (mgr.24419) 283 : cluster [DBG] pgmap v191: 132 pgs: 132 active+clean; 455 KiB data, 216 
MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:09:53.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:53 vm06 bash[27746]: audit 2026-03-08T23:09:52.649881+0000 mon.a (mon.0) 893 : audit [INF] from='client.? 192.168.123.106:0/831676926' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.5"}]: dispatch 2026-03-08T23:09:53.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:53 vm06 bash[27746]: audit 2026-03-08T23:09:52.649881+0000 mon.a (mon.0) 893 : audit [INF] from='client.? 192.168.123.106:0/831676926' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.5"}]: dispatch 2026-03-08T23:09:53.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:53 vm06 bash[27746]: audit 2026-03-08T23:09:52.833670+0000 mon.c (mon.2) 142 : audit [INF] from='client.? 192.168.123.106:0/1281867745' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.6"}]: dispatch 2026-03-08T23:09:53.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:53 vm06 bash[27746]: audit 2026-03-08T23:09:52.833670+0000 mon.c (mon.2) 142 : audit [INF] from='client.? 
192.168.123.106:0/1281867745' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.6"}]: dispatch 2026-03-08T23:09:53.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:53 vm06 bash[27746]: audit 2026-03-08T23:09:52.892738+0000 mon.c (mon.2) 143 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:09:53.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:53 vm06 bash[27746]: audit 2026-03-08T23:09:52.892738+0000 mon.c (mon.2) 143 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:09:53.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:53 vm06 bash[27746]: audit 2026-03-08T23:09:53.002014+0000 mon.a (mon.0) 894 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:09:53.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:53 vm06 bash[27746]: audit 2026-03-08T23:09:53.002014+0000 mon.a (mon.0) 894 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:09:53.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:53 vm06 bash[27746]: audit 2026-03-08T23:09:53.010406+0000 mon.a (mon.0) 895 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:09:53.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:53 vm06 bash[27746]: audit 2026-03-08T23:09:53.010406+0000 mon.a (mon.0) 895 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:09:53.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:53 vm06 bash[27746]: audit 2026-03-08T23:09:53.014010+0000 mon.c (mon.2) 144 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:09:53.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:53 vm06 bash[27746]: audit 2026-03-08T23:09:53.014010+0000 mon.c (mon.2) 144 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' 
entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:09:53.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:53 vm06 bash[27746]: audit 2026-03-08T23:09:53.015929+0000 mon.c (mon.2) 145 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:09:53.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:53 vm06 bash[27746]: audit 2026-03-08T23:09:53.015929+0000 mon.c (mon.2) 145 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:09:53.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:53 vm06 bash[27746]: audit 2026-03-08T23:09:53.017126+0000 mon.c (mon.2) 146 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:09:53.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:53 vm06 bash[27746]: audit 2026-03-08T23:09:53.017126+0000 mon.c (mon.2) 146 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:09:53.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:53 vm06 bash[27746]: audit 2026-03-08T23:09:53.022432+0000 mon.a (mon.0) 896 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:09:53.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:53 vm06 bash[27746]: audit 2026-03-08T23:09:53.022432+0000 mon.a (mon.0) 896 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:09:53.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:53 vm06 bash[27746]: audit 2026-03-08T23:09:53.036146+0000 mon.c (mon.2) 147 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get-or-create-pending", "entity": "osd.6", "format": "json"}]: dispatch 2026-03-08T23:09:53.530 
INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:53 vm06 bash[27746]: audit 2026-03-08T23:09:53.036146+0000 mon.c (mon.2) 147 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get-or-create-pending", "entity": "osd.6", "format": "json"}]: dispatch 2026-03-08T23:09:53.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:53 vm06 bash[27746]: audit 2026-03-08T23:09:53.036362+0000 mon.a (mon.0) 897 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create-pending", "entity": "osd.6", "format": "json"}]: dispatch 2026-03-08T23:09:53.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:53 vm06 bash[27746]: audit 2026-03-08T23:09:53.036362+0000 mon.a (mon.0) 897 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create-pending", "entity": "osd.6", "format": "json"}]: dispatch 2026-03-08T23:09:53.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:53 vm06 bash[27746]: audit 2026-03-08T23:09:53.038477+0000 mon.a (mon.0) 898 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd='[{"prefix": "auth get-or-create-pending", "entity": "osd.6", "format": "json"}]': finished 2026-03-08T23:09:53.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:53 vm06 bash[27746]: audit 2026-03-08T23:09:53.038477+0000 mon.a (mon.0) 898 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd='[{"prefix": "auth get-or-create-pending", "entity": "osd.6", "format": "json"}]': finished 2026-03-08T23:09:53.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:53 vm11 bash[23232]: cluster 2026-03-08T23:09:51.692516+0000 mgr.y (mgr.24419) 283 : cluster [DBG] pgmap v191: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:09:53.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:53 vm11 bash[23232]: cluster 2026-03-08T23:09:51.692516+0000 mgr.y (mgr.24419) 283 : cluster [DBG] pgmap v191: 132 pgs: 132 active+clean; 455 KiB data, 216 
MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:09:53.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:53 vm11 bash[23232]: audit 2026-03-08T23:09:52.649881+0000 mon.a (mon.0) 893 : audit [INF] from='client.? 192.168.123.106:0/831676926' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.5"}]: dispatch 2026-03-08T23:09:53.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:53 vm11 bash[23232]: audit 2026-03-08T23:09:52.649881+0000 mon.a (mon.0) 893 : audit [INF] from='client.? 192.168.123.106:0/831676926' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.5"}]: dispatch 2026-03-08T23:09:53.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:53 vm11 bash[23232]: audit 2026-03-08T23:09:52.833670+0000 mon.c (mon.2) 142 : audit [INF] from='client.? 192.168.123.106:0/1281867745' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.6"}]: dispatch 2026-03-08T23:09:53.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:53 vm11 bash[23232]: audit 2026-03-08T23:09:52.833670+0000 mon.c (mon.2) 142 : audit [INF] from='client.? 
192.168.123.106:0/1281867745' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.6"}]: dispatch 2026-03-08T23:09:53.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:53 vm11 bash[23232]: audit 2026-03-08T23:09:52.892738+0000 mon.c (mon.2) 143 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:09:53.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:53 vm11 bash[23232]: audit 2026-03-08T23:09:52.892738+0000 mon.c (mon.2) 143 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:09:53.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:53 vm11 bash[23232]: audit 2026-03-08T23:09:53.002014+0000 mon.a (mon.0) 894 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:09:53.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:53 vm11 bash[23232]: audit 2026-03-08T23:09:53.002014+0000 mon.a (mon.0) 894 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:09:53.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:53 vm11 bash[23232]: audit 2026-03-08T23:09:53.010406+0000 mon.a (mon.0) 895 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:09:53.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:53 vm11 bash[23232]: audit 2026-03-08T23:09:53.010406+0000 mon.a (mon.0) 895 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:09:53.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:53 vm11 bash[23232]: audit 2026-03-08T23:09:53.014010+0000 mon.c (mon.2) 144 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:09:53.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:53 vm11 bash[23232]: audit 2026-03-08T23:09:53.014010+0000 mon.c (mon.2) 144 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' 
entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:09:53.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:53 vm11 bash[23232]: audit 2026-03-08T23:09:53.015929+0000 mon.c (mon.2) 145 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:09:53.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:53 vm11 bash[23232]: audit 2026-03-08T23:09:53.015929+0000 mon.c (mon.2) 145 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:09:53.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:53 vm11 bash[23232]: audit 2026-03-08T23:09:53.017126+0000 mon.c (mon.2) 146 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:09:53.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:53 vm11 bash[23232]: audit 2026-03-08T23:09:53.017126+0000 mon.c (mon.2) 146 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:09:53.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:53 vm11 bash[23232]: audit 2026-03-08T23:09:53.022432+0000 mon.a (mon.0) 896 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:09:53.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:53 vm11 bash[23232]: audit 2026-03-08T23:09:53.022432+0000 mon.a (mon.0) 896 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:09:53.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:53 vm11 bash[23232]: audit 2026-03-08T23:09:53.036146+0000 mon.c (mon.2) 147 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get-or-create-pending", "entity": "osd.6", "format": "json"}]: dispatch 2026-03-08T23:09:53.559 
INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:53 vm11 bash[23232]: audit 2026-03-08T23:09:53.036146+0000 mon.c (mon.2) 147 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get-or-create-pending", "entity": "osd.6", "format": "json"}]: dispatch 2026-03-08T23:09:53.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:53 vm11 bash[23232]: audit 2026-03-08T23:09:53.036362+0000 mon.a (mon.0) 897 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create-pending", "entity": "osd.6", "format": "json"}]: dispatch 2026-03-08T23:09:53.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:53 vm11 bash[23232]: audit 2026-03-08T23:09:53.036362+0000 mon.a (mon.0) 897 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create-pending", "entity": "osd.6", "format": "json"}]: dispatch 2026-03-08T23:09:53.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:53 vm11 bash[23232]: audit 2026-03-08T23:09:53.038477+0000 mon.a (mon.0) 898 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd='[{"prefix": "auth get-or-create-pending", "entity": "osd.6", "format": "json"}]': finished 2026-03-08T23:09:53.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:53 vm11 bash[23232]: audit 2026-03-08T23:09:53.038477+0000 mon.a (mon.0) 898 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd='[{"prefix": "auth get-or-create-pending", "entity": "osd.6", "format": "json"}]': finished 2026-03-08T23:09:54.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:54 vm06 bash[20625]: audit 2026-03-08T23:09:52.361142+0000 mgr.y (mgr.24419) 284 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:09:54.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:54 vm06 bash[20625]: audit 2026-03-08T23:09:52.361142+0000 mgr.y (mgr.24419) 284 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' 
cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:09:54.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:54 vm06 bash[20625]: audit 2026-03-08T23:09:52.996212+0000 mgr.y (mgr.24419) 285 : audit [DBG] from='client.24971 -' entity='client.admin' cmd=[{"prefix": "orch daemon", "action": "rotate-key", "name": "osd.6", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:09:54.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:54 vm06 bash[20625]: audit 2026-03-08T23:09:52.996212+0000 mgr.y (mgr.24419) 285 : audit [DBG] from='client.24971 -' entity='client.admin' cmd=[{"prefix": "orch daemon", "action": "rotate-key", "name": "osd.6", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:09:54.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:54 vm06 bash[20625]: cephadm 2026-03-08T23:09:52.996631+0000 mgr.y (mgr.24419) 286 : cephadm [INF] Schedule rotate-key daemon osd.6 2026-03-08T23:09:54.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:54 vm06 bash[20625]: cephadm 2026-03-08T23:09:52.996631+0000 mgr.y (mgr.24419) 286 : cephadm [INF] Schedule rotate-key daemon osd.6 2026-03-08T23:09:54.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:54 vm06 bash[20625]: cephadm 2026-03-08T23:09:53.035910+0000 mgr.y (mgr.24419) 287 : cephadm [INF] Rotating authentication key for osd.6 2026-03-08T23:09:54.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:54 vm06 bash[20625]: cephadm 2026-03-08T23:09:53.035910+0000 mgr.y (mgr.24419) 287 : cephadm [INF] Rotating authentication key for osd.6 2026-03-08T23:09:54.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:54 vm06 bash[20625]: cephadm 2026-03-08T23:09:53.042931+0000 mgr.y (mgr.24419) 288 : cephadm [INF] Reconfiguring daemon osd.6 on vm11 2026-03-08T23:09:54.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:54 vm06 bash[20625]: cephadm 2026-03-08T23:09:53.042931+0000 mgr.y (mgr.24419) 288 : cephadm [INF] Reconfiguring daemon osd.6 on vm11 2026-03-08T23:09:54.529 
INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:54 vm06 bash[20625]: audit 2026-03-08T23:09:53.437933+0000 mon.a (mon.0) 899 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:09:54.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:54 vm06 bash[20625]: audit 2026-03-08T23:09:53.437933+0000 mon.a (mon.0) 899 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:09:54.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:54 vm06 bash[20625]: audit 2026-03-08T23:09:53.445952+0000 mon.a (mon.0) 900 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:09:54.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:54 vm06 bash[20625]: audit 2026-03-08T23:09:53.445952+0000 mon.a (mon.0) 900 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:09:54.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:54 vm06 bash[20625]: audit 2026-03-08T23:09:53.615682+0000 mon.a (mon.0) 901 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:09:54.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:54 vm06 bash[20625]: audit 2026-03-08T23:09:53.615682+0000 mon.a (mon.0) 901 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:09:54.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:54 vm06 bash[20625]: audit 2026-03-08T23:09:53.623728+0000 mon.a (mon.0) 902 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:09:54.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:54 vm06 bash[20625]: audit 2026-03-08T23:09:53.623728+0000 mon.a (mon.0) 902 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:09:54.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:54 vm06 bash[27746]: audit 2026-03-08T23:09:52.361142+0000 mgr.y (mgr.24419) 284 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:09:54.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:54 vm06 bash[27746]: audit 2026-03-08T23:09:52.361142+0000 
mgr.y (mgr.24419) 284 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:09:54.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:54 vm06 bash[27746]: audit 2026-03-08T23:09:52.996212+0000 mgr.y (mgr.24419) 285 : audit [DBG] from='client.24971 -' entity='client.admin' cmd=[{"prefix": "orch daemon", "action": "rotate-key", "name": "osd.6", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:09:54.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:54 vm06 bash[27746]: audit 2026-03-08T23:09:52.996212+0000 mgr.y (mgr.24419) 285 : audit [DBG] from='client.24971 -' entity='client.admin' cmd=[{"prefix": "orch daemon", "action": "rotate-key", "name": "osd.6", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:09:54.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:54 vm06 bash[27746]: cephadm 2026-03-08T23:09:52.996631+0000 mgr.y (mgr.24419) 286 : cephadm [INF] Schedule rotate-key daemon osd.6 2026-03-08T23:09:54.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:54 vm06 bash[27746]: cephadm 2026-03-08T23:09:52.996631+0000 mgr.y (mgr.24419) 286 : cephadm [INF] Schedule rotate-key daemon osd.6 2026-03-08T23:09:54.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:54 vm06 bash[27746]: cephadm 2026-03-08T23:09:53.035910+0000 mgr.y (mgr.24419) 287 : cephadm [INF] Rotating authentication key for osd.6 2026-03-08T23:09:54.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:54 vm06 bash[27746]: cephadm 2026-03-08T23:09:53.035910+0000 mgr.y (mgr.24419) 287 : cephadm [INF] Rotating authentication key for osd.6 2026-03-08T23:09:54.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:54 vm06 bash[27746]: cephadm 2026-03-08T23:09:53.042931+0000 mgr.y (mgr.24419) 288 : cephadm [INF] Reconfiguring daemon osd.6 on vm11 2026-03-08T23:09:54.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:54 vm06 bash[27746]: cephadm 2026-03-08T23:09:53.042931+0000 mgr.y 
(mgr.24419) 288 : cephadm [INF] Reconfiguring daemon osd.6 on vm11 2026-03-08T23:09:54.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:54 vm06 bash[27746]: audit 2026-03-08T23:09:53.437933+0000 mon.a (mon.0) 899 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:09:54.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:54 vm06 bash[27746]: audit 2026-03-08T23:09:53.437933+0000 mon.a (mon.0) 899 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:09:54.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:54 vm06 bash[27746]: audit 2026-03-08T23:09:53.445952+0000 mon.a (mon.0) 900 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:09:54.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:54 vm06 bash[27746]: audit 2026-03-08T23:09:53.445952+0000 mon.a (mon.0) 900 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:09:54.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:54 vm06 bash[27746]: audit 2026-03-08T23:09:53.615682+0000 mon.a (mon.0) 901 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:09:54.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:54 vm06 bash[27746]: audit 2026-03-08T23:09:53.615682+0000 mon.a (mon.0) 901 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:09:54.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:54 vm06 bash[27746]: audit 2026-03-08T23:09:53.623728+0000 mon.a (mon.0) 902 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:09:54.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:54 vm06 bash[27746]: audit 2026-03-08T23:09:53.623728+0000 mon.a (mon.0) 902 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:09:54.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:54 vm11 bash[23232]: audit 2026-03-08T23:09:52.361142+0000 mgr.y (mgr.24419) 284 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:09:54.558 
INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:54 vm11 bash[23232]: audit 2026-03-08T23:09:52.361142+0000 mgr.y (mgr.24419) 284 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:09:54.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:54 vm11 bash[23232]: audit 2026-03-08T23:09:52.996212+0000 mgr.y (mgr.24419) 285 : audit [DBG] from='client.24971 -' entity='client.admin' cmd=[{"prefix": "orch daemon", "action": "rotate-key", "name": "osd.6", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:09:54.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:54 vm11 bash[23232]: audit 2026-03-08T23:09:52.996212+0000 mgr.y (mgr.24419) 285 : audit [DBG] from='client.24971 -' entity='client.admin' cmd=[{"prefix": "orch daemon", "action": "rotate-key", "name": "osd.6", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:09:54.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:54 vm11 bash[23232]: cephadm 2026-03-08T23:09:52.996631+0000 mgr.y (mgr.24419) 286 : cephadm [INF] Schedule rotate-key daemon osd.6 2026-03-08T23:09:54.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:54 vm11 bash[23232]: cephadm 2026-03-08T23:09:52.996631+0000 mgr.y (mgr.24419) 286 : cephadm [INF] Schedule rotate-key daemon osd.6 2026-03-08T23:09:54.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:54 vm11 bash[23232]: cephadm 2026-03-08T23:09:53.035910+0000 mgr.y (mgr.24419) 287 : cephadm [INF] Rotating authentication key for osd.6 2026-03-08T23:09:54.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:54 vm11 bash[23232]: cephadm 2026-03-08T23:09:53.035910+0000 mgr.y (mgr.24419) 287 : cephadm [INF] Rotating authentication key for osd.6 2026-03-08T23:09:54.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:54 vm11 bash[23232]: cephadm 2026-03-08T23:09:53.042931+0000 mgr.y (mgr.24419) 288 : cephadm [INF] Reconfiguring daemon osd.6 on vm11 2026-03-08T23:09:54.558 
INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:54 vm11 bash[23232]: cephadm 2026-03-08T23:09:53.042931+0000 mgr.y (mgr.24419) 288 : cephadm [INF] Reconfiguring daemon osd.6 on vm11 2026-03-08T23:09:54.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:54 vm11 bash[23232]: audit 2026-03-08T23:09:53.437933+0000 mon.a (mon.0) 899 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:09:54.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:54 vm11 bash[23232]: audit 2026-03-08T23:09:53.437933+0000 mon.a (mon.0) 899 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:09:54.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:54 vm11 bash[23232]: audit 2026-03-08T23:09:53.445952+0000 mon.a (mon.0) 900 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:09:54.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:54 vm11 bash[23232]: audit 2026-03-08T23:09:53.445952+0000 mon.a (mon.0) 900 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:09:54.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:54 vm11 bash[23232]: audit 2026-03-08T23:09:53.615682+0000 mon.a (mon.0) 901 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:09:54.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:54 vm11 bash[23232]: audit 2026-03-08T23:09:53.615682+0000 mon.a (mon.0) 901 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:09:54.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:54 vm11 bash[23232]: audit 2026-03-08T23:09:53.623728+0000 mon.a (mon.0) 902 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:09:54.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:54 vm11 bash[23232]: audit 2026-03-08T23:09:53.623728+0000 mon.a (mon.0) 902 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:09:55.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:55 vm06 bash[20625]: cluster 2026-03-08T23:09:53.692821+0000 mgr.y (mgr.24419) 289 : cluster [DBG] pgmap v192: 132 pgs: 132 active+clean; 
455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:09:55.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:55 vm06 bash[20625]: cluster 2026-03-08T23:09:53.692821+0000 mgr.y (mgr.24419) 289 : cluster [DBG] pgmap v192: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:09:55.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:55 vm06 bash[27746]: cluster 2026-03-08T23:09:53.692821+0000 mgr.y (mgr.24419) 289 : cluster [DBG] pgmap v192: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:09:55.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:55 vm06 bash[27746]: cluster 2026-03-08T23:09:53.692821+0000 mgr.y (mgr.24419) 289 : cluster [DBG] pgmap v192: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:09:55.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:55 vm11 bash[23232]: cluster 2026-03-08T23:09:53.692821+0000 mgr.y (mgr.24419) 289 : cluster [DBG] pgmap v192: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:09:55.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:55 vm11 bash[23232]: cluster 2026-03-08T23:09:53.692821+0000 mgr.y (mgr.24419) 289 : cluster [DBG] pgmap v192: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:09:56.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:56 vm06 bash[20625]: cluster 2026-03-08T23:09:55.693262+0000 mgr.y (mgr.24419) 290 : cluster [DBG] pgmap v193: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:09:56.536 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:56 vm06 bash[20625]: cluster 2026-03-08T23:09:55.693262+0000 mgr.y (mgr.24419) 290 : cluster [DBG] pgmap v193: 132 pgs: 132 
active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:09:56.536 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:56 vm06 bash[27746]: cluster 2026-03-08T23:09:55.693262+0000 mgr.y (mgr.24419) 290 : cluster [DBG] pgmap v193: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:09:56.536 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:56 vm06 bash[27746]: cluster 2026-03-08T23:09:55.693262+0000 mgr.y (mgr.24419) 290 : cluster [DBG] pgmap v193: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:09:56.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:56 vm11 bash[23232]: cluster 2026-03-08T23:09:55.693262+0000 mgr.y (mgr.24419) 290 : cluster [DBG] pgmap v193: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:09:56.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:56 vm11 bash[23232]: cluster 2026-03-08T23:09:55.693262+0000 mgr.y (mgr.24419) 290 : cluster [DBG] pgmap v193: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:09:58.025 INFO:teuthology.orchestra.run.vm06.stderr:++ ceph auth get-key osd.6 2026-03-08T23:09:58.212 INFO:teuthology.orchestra.run.vm06.stderr:+ NK=AQDc/61p0aC0JBAAvMoZLrqBpZLo7HRP13g7BQ== 2026-03-08T23:09:58.212 INFO:teuthology.orchestra.run.vm06.stderr:+ '[' AQDc/61p0aC0JBAAvMoZLrqBpZLo7HRP13g7BQ== == AQDc/61p0aC0JBAAvMoZLrqBpZLo7HRP13g7BQ== ']' 2026-03-08T23:09:58.212 INFO:teuthology.orchestra.run.vm06.stderr:+ sleep 5 2026-03-08T23:09:59.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:58 vm06 bash[20625]: cluster 2026-03-08T23:09:57.693551+0000 mgr.y (mgr.24419) 291 : cluster [DBG] pgmap v194: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:09:59.279 
INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:58 vm06 bash[20625]: cluster 2026-03-08T23:09:57.693551+0000 mgr.y (mgr.24419) 291 : cluster [DBG] pgmap v194: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:09:59.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:58 vm06 bash[20625]: audit 2026-03-08T23:09:58.201959+0000 mon.b (mon.1) 47 : audit [INF] from='client.? 192.168.123.106:0/722135789' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.6"}]: dispatch 2026-03-08T23:09:59.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:09:58 vm06 bash[20625]: audit 2026-03-08T23:09:58.201959+0000 mon.b (mon.1) 47 : audit [INF] from='client.? 192.168.123.106:0/722135789' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.6"}]: dispatch 2026-03-08T23:09:59.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:58 vm06 bash[27746]: cluster 2026-03-08T23:09:57.693551+0000 mgr.y (mgr.24419) 291 : cluster [DBG] pgmap v194: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:09:59.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:58 vm06 bash[27746]: cluster 2026-03-08T23:09:57.693551+0000 mgr.y (mgr.24419) 291 : cluster [DBG] pgmap v194: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:09:59.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:58 vm06 bash[27746]: audit 2026-03-08T23:09:58.201959+0000 mon.b (mon.1) 47 : audit [INF] from='client.? 192.168.123.106:0/722135789' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.6"}]: dispatch 2026-03-08T23:09:59.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:09:58 vm06 bash[27746]: audit 2026-03-08T23:09:58.201959+0000 mon.b (mon.1) 47 : audit [INF] from='client.? 
192.168.123.106:0/722135789' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.6"}]: dispatch 2026-03-08T23:09:59.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:58 vm11 bash[23232]: cluster 2026-03-08T23:09:57.693551+0000 mgr.y (mgr.24419) 291 : cluster [DBG] pgmap v194: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:09:59.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:58 vm11 bash[23232]: cluster 2026-03-08T23:09:57.693551+0000 mgr.y (mgr.24419) 291 : cluster [DBG] pgmap v194: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:09:59.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:58 vm11 bash[23232]: audit 2026-03-08T23:09:58.201959+0000 mon.b (mon.1) 47 : audit [INF] from='client.? 192.168.123.106:0/722135789' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.6"}]: dispatch 2026-03-08T23:09:59.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:09:58 vm11 bash[23232]: audit 2026-03-08T23:09:58.201959+0000 mon.b (mon.1) 47 : audit [INF] from='client.? 
192.168.123.106:0/722135789' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.6"}]: dispatch 2026-03-08T23:10:01.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:00 vm06 bash[20625]: cluster 2026-03-08T23:09:59.693831+0000 mgr.y (mgr.24419) 292 : cluster [DBG] pgmap v195: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:10:01.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:00 vm06 bash[20625]: cluster 2026-03-08T23:09:59.693831+0000 mgr.y (mgr.24419) 292 : cluster [DBG] pgmap v195: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:10:01.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:00 vm06 bash[20625]: cluster 2026-03-08T23:10:00.000088+0000 mon.a (mon.0) 903 : cluster [INF] overall HEALTH_OK 2026-03-08T23:10:01.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:00 vm06 bash[20625]: cluster 2026-03-08T23:10:00.000088+0000 mon.a (mon.0) 903 : cluster [INF] overall HEALTH_OK 2026-03-08T23:10:01.029 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:10:00 vm06 bash[20883]: ::ffff:192.168.123.111 - - [08/Mar/2026:23:10:00] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-08T23:10:01.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:00 vm06 bash[27746]: cluster 2026-03-08T23:09:59.693831+0000 mgr.y (mgr.24419) 292 : cluster [DBG] pgmap v195: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:10:01.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:00 vm06 bash[27746]: cluster 2026-03-08T23:09:59.693831+0000 mgr.y (mgr.24419) 292 : cluster [DBG] pgmap v195: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:10:01.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:00 vm06 bash[27746]: cluster 2026-03-08T23:10:00.000088+0000 mon.a (mon.0) 903 : 
cluster [INF] overall HEALTH_OK 2026-03-08T23:10:01.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:00 vm06 bash[27746]: cluster 2026-03-08T23:10:00.000088+0000 mon.a (mon.0) 903 : cluster [INF] overall HEALTH_OK 2026-03-08T23:10:01.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:00 vm11 bash[23232]: cluster 2026-03-08T23:09:59.693831+0000 mgr.y (mgr.24419) 292 : cluster [DBG] pgmap v195: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:10:01.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:00 vm11 bash[23232]: cluster 2026-03-08T23:09:59.693831+0000 mgr.y (mgr.24419) 292 : cluster [DBG] pgmap v195: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:10:01.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:00 vm11 bash[23232]: cluster 2026-03-08T23:10:00.000088+0000 mon.a (mon.0) 903 : cluster [INF] overall HEALTH_OK 2026-03-08T23:10:01.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:00 vm11 bash[23232]: cluster 2026-03-08T23:10:00.000088+0000 mon.a (mon.0) 903 : cluster [INF] overall HEALTH_OK 2026-03-08T23:10:02.808 INFO:journalctl@ceph.iscsi.iscsi.a.vm11.stdout:Mar 08 23:10:02 vm11 bash[48986]: debug there is no tcmu-runner data available 2026-03-08T23:10:03.213 INFO:teuthology.orchestra.run.vm06.stderr:++ ceph auth get-key osd.6 2026-03-08T23:10:03.278 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:02 vm06 bash[20625]: cluster 2026-03-08T23:10:01.694174+0000 mgr.y (mgr.24419) 293 : cluster [DBG] pgmap v196: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:10:03.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:02 vm06 bash[20625]: cluster 2026-03-08T23:10:01.694174+0000 mgr.y (mgr.24419) 293 : cluster [DBG] pgmap v196: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 
2026-03-08T23:10:03.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:02 vm06 bash[27746]: cluster 2026-03-08T23:10:01.694174+0000 mgr.y (mgr.24419) 293 : cluster [DBG] pgmap v196: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:10:03.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:02 vm06 bash[27746]: cluster 2026-03-08T23:10:01.694174+0000 mgr.y (mgr.24419) 293 : cluster [DBG] pgmap v196: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:10:03.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:02 vm11 bash[23232]: cluster 2026-03-08T23:10:01.694174+0000 mgr.y (mgr.24419) 293 : cluster [DBG] pgmap v196: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:10:03.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:02 vm11 bash[23232]: cluster 2026-03-08T23:10:01.694174+0000 mgr.y (mgr.24419) 293 : cluster [DBG] pgmap v196: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:10:03.395 INFO:teuthology.orchestra.run.vm06.stderr:+ NK=AQDc/61p0aC0JBAAvMoZLrqBpZLo7HRP13g7BQ== 2026-03-08T23:10:03.395 INFO:teuthology.orchestra.run.vm06.stderr:+ '[' AQDc/61p0aC0JBAAvMoZLrqBpZLo7HRP13g7BQ== == AQDc/61p0aC0JBAAvMoZLrqBpZLo7HRP13g7BQ== ']' 2026-03-08T23:10:03.395 INFO:teuthology.orchestra.run.vm06.stderr:+ sleep 5 2026-03-08T23:10:04.278 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:03 vm06 bash[20625]: audit 2026-03-08T23:10:02.371539+0000 mgr.y (mgr.24419) 294 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:10:04.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:03 vm06 bash[20625]: audit 2026-03-08T23:10:02.371539+0000 mgr.y (mgr.24419) 294 : audit [DBG] from='client.24421 -' 
entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:10:04.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:03 vm06 bash[20625]: audit 2026-03-08T23:10:03.387482+0000 mon.c (mon.2) 148 : audit [INF] from='client.? 192.168.123.106:0/3018851835' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.6"}]: dispatch 2026-03-08T23:10:04.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:03 vm06 bash[20625]: audit 2026-03-08T23:10:03.387482+0000 mon.c (mon.2) 148 : audit [INF] from='client.? 192.168.123.106:0/3018851835' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.6"}]: dispatch 2026-03-08T23:10:04.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:03 vm06 bash[27746]: audit 2026-03-08T23:10:02.371539+0000 mgr.y (mgr.24419) 294 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:10:04.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:03 vm06 bash[27746]: audit 2026-03-08T23:10:02.371539+0000 mgr.y (mgr.24419) 294 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:10:04.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:03 vm06 bash[27746]: audit 2026-03-08T23:10:03.387482+0000 mon.c (mon.2) 148 : audit [INF] from='client.? 192.168.123.106:0/3018851835' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.6"}]: dispatch 2026-03-08T23:10:04.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:03 vm06 bash[27746]: audit 2026-03-08T23:10:03.387482+0000 mon.c (mon.2) 148 : audit [INF] from='client.? 
192.168.123.106:0/3018851835' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.6"}]: dispatch 2026-03-08T23:10:04.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:03 vm11 bash[23232]: audit 2026-03-08T23:10:02.371539+0000 mgr.y (mgr.24419) 294 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:10:04.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:03 vm11 bash[23232]: audit 2026-03-08T23:10:02.371539+0000 mgr.y (mgr.24419) 294 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:10:04.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:03 vm11 bash[23232]: audit 2026-03-08T23:10:03.387482+0000 mon.c (mon.2) 148 : audit [INF] from='client.? 192.168.123.106:0/3018851835' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.6"}]: dispatch 2026-03-08T23:10:04.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:03 vm11 bash[23232]: audit 2026-03-08T23:10:03.387482+0000 mon.c (mon.2) 148 : audit [INF] from='client.? 
192.168.123.106:0/3018851835' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.6"}]: dispatch 2026-03-08T23:10:05.278 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:04 vm06 bash[20625]: cluster 2026-03-08T23:10:03.694433+0000 mgr.y (mgr.24419) 295 : cluster [DBG] pgmap v197: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:10:05.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:04 vm06 bash[20625]: cluster 2026-03-08T23:10:03.694433+0000 mgr.y (mgr.24419) 295 : cluster [DBG] pgmap v197: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:10:05.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:04 vm06 bash[27746]: cluster 2026-03-08T23:10:03.694433+0000 mgr.y (mgr.24419) 295 : cluster [DBG] pgmap v197: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:10:05.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:04 vm06 bash[27746]: cluster 2026-03-08T23:10:03.694433+0000 mgr.y (mgr.24419) 295 : cluster [DBG] pgmap v197: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:10:05.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:04 vm11 bash[23232]: cluster 2026-03-08T23:10:03.694433+0000 mgr.y (mgr.24419) 295 : cluster [DBG] pgmap v197: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:10:05.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:04 vm11 bash[23232]: cluster 2026-03-08T23:10:03.694433+0000 mgr.y (mgr.24419) 295 : cluster [DBG] pgmap v197: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:10:07.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:06 vm06 bash[20625]: cluster 2026-03-08T23:10:05.694926+0000 mgr.y (mgr.24419) 296 : cluster 
[DBG] pgmap v198: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:10:07.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:06 vm06 bash[20625]: cluster 2026-03-08T23:10:05.694926+0000 mgr.y (mgr.24419) 296 : cluster [DBG] pgmap v198: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:10:07.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:06 vm06 bash[27746]: cluster 2026-03-08T23:10:05.694926+0000 mgr.y (mgr.24419) 296 : cluster [DBG] pgmap v198: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:10:07.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:06 vm06 bash[27746]: cluster 2026-03-08T23:10:05.694926+0000 mgr.y (mgr.24419) 296 : cluster [DBG] pgmap v198: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:10:07.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:06 vm11 bash[23232]: cluster 2026-03-08T23:10:05.694926+0000 mgr.y (mgr.24419) 296 : cluster [DBG] pgmap v198: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:10:07.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:06 vm11 bash[23232]: cluster 2026-03-08T23:10:05.694926+0000 mgr.y (mgr.24419) 296 : cluster [DBG] pgmap v198: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:10:08.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:07 vm06 bash[20625]: audit 2026-03-08T23:10:07.901621+0000 mon.c (mon.2) 149 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:10:08.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:07 vm06 bash[20625]: audit 2026-03-08T23:10:07.901621+0000 mon.c (mon.2) 
149 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:10:08.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:07 vm06 bash[27746]: audit 2026-03-08T23:10:07.901621+0000 mon.c (mon.2) 149 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:10:08.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:07 vm06 bash[27746]: audit 2026-03-08T23:10:07.901621+0000 mon.c (mon.2) 149 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:10:08.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:07 vm11 bash[23232]: audit 2026-03-08T23:10:07.901621+0000 mon.c (mon.2) 149 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:10:08.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:07 vm11 bash[23232]: audit 2026-03-08T23:10:07.901621+0000 mon.c (mon.2) 149 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:10:08.397 INFO:teuthology.orchestra.run.vm06.stderr:++ ceph auth get-key osd.6 2026-03-08T23:10:08.576 INFO:teuthology.orchestra.run.vm06.stderr:+ NK=AQDc/61p0aC0JBAAvMoZLrqBpZLo7HRP13g7BQ== 2026-03-08T23:10:08.577 INFO:teuthology.orchestra.run.vm06.stderr:+ '[' AQDc/61p0aC0JBAAvMoZLrqBpZLo7HRP13g7BQ== == AQDc/61p0aC0JBAAvMoZLrqBpZLo7HRP13g7BQ== ']' 2026-03-08T23:10:08.577 INFO:teuthology.orchestra.run.vm06.stderr:+ sleep 5 2026-03-08T23:10:09.278 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:08 vm06 bash[20625]: cluster 2026-03-08T23:10:07.695137+0000 mgr.y (mgr.24419) 297 : cluster [DBG] pgmap v199: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 
GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:10:09.278 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:08 vm06 bash[20625]: cluster 2026-03-08T23:10:07.695137+0000 mgr.y (mgr.24419) 297 : cluster [DBG] pgmap v199: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:10:09.278 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:08 vm06 bash[20625]: audit 2026-03-08T23:10:08.569036+0000 mon.c (mon.2) 150 : audit [INF] from='client.? 192.168.123.106:0/1913726224' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.6"}]: dispatch 2026-03-08T23:10:09.278 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:08 vm06 bash[20625]: audit 2026-03-08T23:10:08.569036+0000 mon.c (mon.2) 150 : audit [INF] from='client.? 192.168.123.106:0/1913726224' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.6"}]: dispatch 2026-03-08T23:10:09.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:08 vm06 bash[27746]: cluster 2026-03-08T23:10:07.695137+0000 mgr.y (mgr.24419) 297 : cluster [DBG] pgmap v199: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:10:09.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:08 vm06 bash[27746]: cluster 2026-03-08T23:10:07.695137+0000 mgr.y (mgr.24419) 297 : cluster [DBG] pgmap v199: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:10:09.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:08 vm06 bash[27746]: audit 2026-03-08T23:10:08.569036+0000 mon.c (mon.2) 150 : audit [INF] from='client.? 192.168.123.106:0/1913726224' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.6"}]: dispatch 2026-03-08T23:10:09.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:08 vm06 bash[27746]: audit 2026-03-08T23:10:08.569036+0000 mon.c (mon.2) 150 : audit [INF] from='client.? 
192.168.123.106:0/1913726224' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.6"}]: dispatch 2026-03-08T23:10:09.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:08 vm11 bash[23232]: cluster 2026-03-08T23:10:07.695137+0000 mgr.y (mgr.24419) 297 : cluster [DBG] pgmap v199: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:10:09.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:08 vm11 bash[23232]: cluster 2026-03-08T23:10:07.695137+0000 mgr.y (mgr.24419) 297 : cluster [DBG] pgmap v199: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:10:09.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:08 vm11 bash[23232]: audit 2026-03-08T23:10:08.569036+0000 mon.c (mon.2) 150 : audit [INF] from='client.? 192.168.123.106:0/1913726224' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.6"}]: dispatch 2026-03-08T23:10:09.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:08 vm11 bash[23232]: audit 2026-03-08T23:10:08.569036+0000 mon.c (mon.2) 150 : audit [INF] from='client.? 
192.168.123.106:0/1913726224' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.6"}]: dispatch 2026-03-08T23:10:11.029 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:10:10 vm06 bash[20883]: ::ffff:192.168.123.111 - - [08/Mar/2026:23:10:10] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-08T23:10:11.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:10 vm06 bash[27746]: cluster 2026-03-08T23:10:09.695416+0000 mgr.y (mgr.24419) 298 : cluster [DBG] pgmap v200: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:10:11.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:10 vm06 bash[27746]: cluster 2026-03-08T23:10:09.695416+0000 mgr.y (mgr.24419) 298 : cluster [DBG] pgmap v200: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:10:11.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:10 vm06 bash[20625]: cluster 2026-03-08T23:10:09.695416+0000 mgr.y (mgr.24419) 298 : cluster [DBG] pgmap v200: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:10:11.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:10 vm06 bash[20625]: cluster 2026-03-08T23:10:09.695416+0000 mgr.y (mgr.24419) 298 : cluster [DBG] pgmap v200: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:10:11.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:10 vm11 bash[23232]: cluster 2026-03-08T23:10:09.695416+0000 mgr.y (mgr.24419) 298 : cluster [DBG] pgmap v200: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:10:11.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:10 vm11 bash[23232]: cluster 2026-03-08T23:10:09.695416+0000 mgr.y (mgr.24419) 298 : cluster [DBG] pgmap v200: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 
160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:10:12.808 INFO:journalctl@ceph.iscsi.iscsi.a.vm11.stdout:Mar 08 23:10:12 vm11 bash[48986]: debug there is no tcmu-runner data available 2026-03-08T23:10:13.278 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:13 vm06 bash[20625]: cluster 2026-03-08T23:10:11.695853+0000 mgr.y (mgr.24419) 299 : cluster [DBG] pgmap v201: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:10:13.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:13 vm06 bash[20625]: cluster 2026-03-08T23:10:11.695853+0000 mgr.y (mgr.24419) 299 : cluster [DBG] pgmap v201: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:10:13.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:13 vm06 bash[27746]: cluster 2026-03-08T23:10:11.695853+0000 mgr.y (mgr.24419) 299 : cluster [DBG] pgmap v201: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:10:13.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:13 vm06 bash[27746]: cluster 2026-03-08T23:10:11.695853+0000 mgr.y (mgr.24419) 299 : cluster [DBG] pgmap v201: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:10:13.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:13 vm11 bash[23232]: cluster 2026-03-08T23:10:11.695853+0000 mgr.y (mgr.24419) 299 : cluster [DBG] pgmap v201: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:10:13.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:13 vm11 bash[23232]: cluster 2026-03-08T23:10:11.695853+0000 mgr.y (mgr.24419) 299 : cluster [DBG] pgmap v201: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:10:13.578 INFO:teuthology.orchestra.run.vm06.stderr:++ ceph auth get-key 
osd.6 2026-03-08T23:10:13.758 INFO:teuthology.orchestra.run.vm06.stderr:+ NK=AQDc/61p0aC0JBAAvMoZLrqBpZLo7HRP13g7BQ== 2026-03-08T23:10:13.758 INFO:teuthology.orchestra.run.vm06.stderr:+ '[' AQDc/61p0aC0JBAAvMoZLrqBpZLo7HRP13g7BQ== == AQDc/61p0aC0JBAAvMoZLrqBpZLo7HRP13g7BQ== ']' 2026-03-08T23:10:13.758 INFO:teuthology.orchestra.run.vm06.stderr:+ sleep 5 2026-03-08T23:10:14.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:14 vm06 bash[20625]: audit 2026-03-08T23:10:12.380689+0000 mgr.y (mgr.24419) 300 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:10:14.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:14 vm06 bash[20625]: audit 2026-03-08T23:10:12.380689+0000 mgr.y (mgr.24419) 300 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:10:14.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:14 vm06 bash[20625]: audit 2026-03-08T23:10:13.750636+0000 mon.a (mon.0) 904 : audit [INF] from='client.? 192.168.123.106:0/743969241' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.6"}]: dispatch 2026-03-08T23:10:14.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:14 vm06 bash[20625]: audit 2026-03-08T23:10:13.750636+0000 mon.a (mon.0) 904 : audit [INF] from='client.? 
192.168.123.106:0/743969241' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.6"}]: dispatch 2026-03-08T23:10:14.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:14 vm06 bash[27746]: audit 2026-03-08T23:10:12.380689+0000 mgr.y (mgr.24419) 300 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:10:14.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:14 vm06 bash[27746]: audit 2026-03-08T23:10:12.380689+0000 mgr.y (mgr.24419) 300 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:10:14.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:14 vm06 bash[27746]: audit 2026-03-08T23:10:13.750636+0000 mon.a (mon.0) 904 : audit [INF] from='client.? 192.168.123.106:0/743969241' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.6"}]: dispatch 2026-03-08T23:10:14.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:14 vm06 bash[27746]: audit 2026-03-08T23:10:13.750636+0000 mon.a (mon.0) 904 : audit [INF] from='client.? 
192.168.123.106:0/743969241' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.6"}]: dispatch 2026-03-08T23:10:14.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:14 vm11 bash[23232]: audit 2026-03-08T23:10:12.380689+0000 mgr.y (mgr.24419) 300 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:10:14.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:14 vm11 bash[23232]: audit 2026-03-08T23:10:12.380689+0000 mgr.y (mgr.24419) 300 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:10:14.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:14 vm11 bash[23232]: audit 2026-03-08T23:10:13.750636+0000 mon.a (mon.0) 904 : audit [INF] from='client.? 192.168.123.106:0/743969241' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.6"}]: dispatch 2026-03-08T23:10:14.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:14 vm11 bash[23232]: audit 2026-03-08T23:10:13.750636+0000 mon.a (mon.0) 904 : audit [INF] from='client.? 
192.168.123.106:0/743969241' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.6"}]: dispatch 2026-03-08T23:10:15.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:15 vm06 bash[20625]: cluster 2026-03-08T23:10:13.696104+0000 mgr.y (mgr.24419) 301 : cluster [DBG] pgmap v202: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:10:15.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:15 vm06 bash[20625]: cluster 2026-03-08T23:10:13.696104+0000 mgr.y (mgr.24419) 301 : cluster [DBG] pgmap v202: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:10:15.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:15 vm06 bash[27746]: cluster 2026-03-08T23:10:13.696104+0000 mgr.y (mgr.24419) 301 : cluster [DBG] pgmap v202: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:10:15.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:15 vm06 bash[27746]: cluster 2026-03-08T23:10:13.696104+0000 mgr.y (mgr.24419) 301 : cluster [DBG] pgmap v202: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:10:15.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:15 vm11 bash[23232]: cluster 2026-03-08T23:10:13.696104+0000 mgr.y (mgr.24419) 301 : cluster [DBG] pgmap v202: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:10:15.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:15 vm11 bash[23232]: cluster 2026-03-08T23:10:13.696104+0000 mgr.y (mgr.24419) 301 : cluster [DBG] pgmap v202: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:10:17.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:17 vm06 bash[20625]: cluster 2026-03-08T23:10:15.696492+0000 mgr.y (mgr.24419) 302 : cluster 
[DBG] pgmap v203: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:10:17.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:17 vm06 bash[20625]: cluster 2026-03-08T23:10:15.696492+0000 mgr.y (mgr.24419) 302 : cluster [DBG] pgmap v203: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:10:17.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:17 vm06 bash[27746]: cluster 2026-03-08T23:10:15.696492+0000 mgr.y (mgr.24419) 302 : cluster [DBG] pgmap v203: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:10:17.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:17 vm06 bash[27746]: cluster 2026-03-08T23:10:15.696492+0000 mgr.y (mgr.24419) 302 : cluster [DBG] pgmap v203: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:10:17.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:17 vm11 bash[23232]: cluster 2026-03-08T23:10:15.696492+0000 mgr.y (mgr.24419) 302 : cluster [DBG] pgmap v203: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:10:17.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:17 vm11 bash[23232]: cluster 2026-03-08T23:10:15.696492+0000 mgr.y (mgr.24419) 302 : cluster [DBG] pgmap v203: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:10:18.760 INFO:teuthology.orchestra.run.vm06.stderr:++ ceph auth get-key osd.6 2026-03-08T23:10:18.957 INFO:teuthology.orchestra.run.vm06.stderr:+ NK=AQDBAa5pMYErAhAAavRLfni56Fn4gYxu2/wjcA== 2026-03-08T23:10:18.957 INFO:teuthology.orchestra.run.vm06.stderr:+ '[' AQDc/61p0aC0JBAAvMoZLrqBpZLo7HRP13g7BQ== == AQDBAa5pMYErAhAAavRLfni56Fn4gYxu2/wjcA== ']' 2026-03-08T23:10:18.957 INFO:teuthology.orchestra.run.vm06.stderr:+ for f 
in osd.0 osd.1 osd.2 osd.3 osd.4 osd.5 osd.6 osd.7 mgr.y mgr.x 2026-03-08T23:10:18.957 INFO:teuthology.orchestra.run.vm06.stderr:+ echo 'rotating key for osd.7' 2026-03-08T23:10:18.957 INFO:teuthology.orchestra.run.vm06.stdout:rotating key for osd.7 2026-03-08T23:10:18.957 INFO:teuthology.orchestra.run.vm06.stderr:++ ceph auth get-key osd.7 2026-03-08T23:10:19.138 INFO:teuthology.orchestra.run.vm06.stderr:+ K=AQD+/61puSYrJxAAO3gSAYSeaX529oktDoTwvg== 2026-03-08T23:10:19.138 INFO:teuthology.orchestra.run.vm06.stderr:+ NK=AQD+/61puSYrJxAAO3gSAYSeaX529oktDoTwvg== 2026-03-08T23:10:19.138 INFO:teuthology.orchestra.run.vm06.stderr:+ ceph orch daemon rotate-key osd.7 2026-03-08T23:10:19.326 INFO:teuthology.orchestra.run.vm06.stdout:Scheduled to rotate-key osd.7 on host 'vm11' 2026-03-08T23:10:19.347 INFO:teuthology.orchestra.run.vm06.stderr:+ '[' AQD+/61puSYrJxAAO3gSAYSeaX529oktDoTwvg== == AQD+/61puSYrJxAAO3gSAYSeaX529oktDoTwvg== ']' 2026-03-08T23:10:19.347 INFO:teuthology.orchestra.run.vm06.stderr:+ sleep 5 2026-03-08T23:10:19.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:19 vm06 bash[20625]: cluster 2026-03-08T23:10:17.696760+0000 mgr.y (mgr.24419) 303 : cluster [DBG] pgmap v204: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:10:19.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:19 vm06 bash[20625]: cluster 2026-03-08T23:10:17.696760+0000 mgr.y (mgr.24419) 303 : cluster [DBG] pgmap v204: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:10:19.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:19 vm06 bash[20625]: audit 2026-03-08T23:10:18.948597+0000 mon.a (mon.0) 905 : audit [INF] from='client.? 
192.168.123.106:0/329948950' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.6"}]: dispatch 2026-03-08T23:10:19.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:19 vm06 bash[20625]: audit 2026-03-08T23:10:18.948597+0000 mon.a (mon.0) 905 : audit [INF] from='client.? 192.168.123.106:0/329948950' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.6"}]: dispatch 2026-03-08T23:10:19.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:19 vm06 bash[20625]: audit 2026-03-08T23:10:19.130215+0000 mon.a (mon.0) 906 : audit [INF] from='client.? 192.168.123.106:0/4254118029' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.7"}]: dispatch 2026-03-08T23:10:19.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:19 vm06 bash[20625]: audit 2026-03-08T23:10:19.130215+0000 mon.a (mon.0) 906 : audit [INF] from='client.? 192.168.123.106:0/4254118029' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.7"}]: dispatch 2026-03-08T23:10:19.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:19 vm06 bash[27746]: cluster 2026-03-08T23:10:17.696760+0000 mgr.y (mgr.24419) 303 : cluster [DBG] pgmap v204: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:10:19.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:19 vm06 bash[27746]: cluster 2026-03-08T23:10:17.696760+0000 mgr.y (mgr.24419) 303 : cluster [DBG] pgmap v204: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:10:19.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:19 vm06 bash[27746]: audit 2026-03-08T23:10:18.948597+0000 mon.a (mon.0) 905 : audit [INF] from='client.? 
192.168.123.106:0/329948950' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.6"}]: dispatch 2026-03-08T23:10:19.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:19 vm06 bash[27746]: audit 2026-03-08T23:10:18.948597+0000 mon.a (mon.0) 905 : audit [INF] from='client.? 192.168.123.106:0/329948950' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.6"}]: dispatch 2026-03-08T23:10:19.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:19 vm06 bash[27746]: audit 2026-03-08T23:10:19.130215+0000 mon.a (mon.0) 906 : audit [INF] from='client.? 192.168.123.106:0/4254118029' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.7"}]: dispatch 2026-03-08T23:10:19.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:19 vm06 bash[27746]: audit 2026-03-08T23:10:19.130215+0000 mon.a (mon.0) 906 : audit [INF] from='client.? 192.168.123.106:0/4254118029' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.7"}]: dispatch 2026-03-08T23:10:19.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:19 vm11 bash[23232]: cluster 2026-03-08T23:10:17.696760+0000 mgr.y (mgr.24419) 303 : cluster [DBG] pgmap v204: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:10:19.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:19 vm11 bash[23232]: cluster 2026-03-08T23:10:17.696760+0000 mgr.y (mgr.24419) 303 : cluster [DBG] pgmap v204: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:10:19.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:19 vm11 bash[23232]: audit 2026-03-08T23:10:18.948597+0000 mon.a (mon.0) 905 : audit [INF] from='client.? 
192.168.123.106:0/329948950' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.6"}]: dispatch 2026-03-08T23:10:19.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:19 vm11 bash[23232]: audit 2026-03-08T23:10:18.948597+0000 mon.a (mon.0) 905 : audit [INF] from='client.? 192.168.123.106:0/329948950' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.6"}]: dispatch 2026-03-08T23:10:19.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:19 vm11 bash[23232]: audit 2026-03-08T23:10:19.130215+0000 mon.a (mon.0) 906 : audit [INF] from='client.? 192.168.123.106:0/4254118029' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.7"}]: dispatch 2026-03-08T23:10:19.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:19 vm11 bash[23232]: audit 2026-03-08T23:10:19.130215+0000 mon.a (mon.0) 906 : audit [INF] from='client.? 192.168.123.106:0/4254118029' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.7"}]: dispatch 2026-03-08T23:10:20.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:20 vm11 bash[23232]: audit 2026-03-08T23:10:19.291888+0000 mgr.y (mgr.24419) 304 : audit [DBG] from='client.15114 -' entity='client.admin' cmd=[{"prefix": "orch daemon", "action": "rotate-key", "name": "osd.7", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:10:20.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:20 vm11 bash[23232]: audit 2026-03-08T23:10:19.291888+0000 mgr.y (mgr.24419) 304 : audit [DBG] from='client.15114 -' entity='client.admin' cmd=[{"prefix": "orch daemon", "action": "rotate-key", "name": "osd.7", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:10:20.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:20 vm11 bash[23232]: cephadm 2026-03-08T23:10:19.292362+0000 mgr.y (mgr.24419) 305 : cephadm [INF] Schedule rotate-key daemon osd.7 2026-03-08T23:10:20.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:20 vm11 bash[23232]: cephadm 2026-03-08T23:10:19.292362+0000 
mgr.y (mgr.24419) 305 : cephadm [INF] Schedule rotate-key daemon osd.7 2026-03-08T23:10:20.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:20 vm11 bash[23232]: audit 2026-03-08T23:10:19.305670+0000 mon.a (mon.0) 907 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:10:20.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:20 vm11 bash[23232]: audit 2026-03-08T23:10:19.305670+0000 mon.a (mon.0) 907 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:10:20.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:20 vm11 bash[23232]: audit 2026-03-08T23:10:19.326164+0000 mon.a (mon.0) 908 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:10:20.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:20 vm11 bash[23232]: audit 2026-03-08T23:10:19.326164+0000 mon.a (mon.0) 908 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:10:20.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:20 vm11 bash[23232]: audit 2026-03-08T23:10:19.327776+0000 mon.c (mon.2) 151 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:10:20.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:20 vm11 bash[23232]: audit 2026-03-08T23:10:19.327776+0000 mon.c (mon.2) 151 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:10:20.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:20 vm11 bash[23232]: audit 2026-03-08T23:10:19.639178+0000 mon.c (mon.2) 152 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:10:20.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:20 vm11 bash[23232]: audit 2026-03-08T23:10:19.639178+0000 mon.c (mon.2) 152 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config 
generate-minimal-conf"}]: dispatch 2026-03-08T23:10:20.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:20 vm11 bash[23232]: audit 2026-03-08T23:10:19.640651+0000 mon.c (mon.2) 153 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:10:20.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:20 vm11 bash[23232]: audit 2026-03-08T23:10:19.640651+0000 mon.c (mon.2) 153 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:10:20.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:20 vm11 bash[23232]: audit 2026-03-08T23:10:19.647220+0000 mon.a (mon.0) 909 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:10:20.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:20 vm11 bash[23232]: audit 2026-03-08T23:10:19.647220+0000 mon.a (mon.0) 909 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:10:20.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:20 vm11 bash[23232]: cephadm 2026-03-08T23:10:19.660072+0000 mgr.y (mgr.24419) 306 : cephadm [INF] Rotating authentication key for osd.7 2026-03-08T23:10:20.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:20 vm11 bash[23232]: cephadm 2026-03-08T23:10:19.660072+0000 mgr.y (mgr.24419) 306 : cephadm [INF] Rotating authentication key for osd.7 2026-03-08T23:10:20.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:20 vm11 bash[23232]: audit 2026-03-08T23:10:19.660325+0000 mon.c (mon.2) 154 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get-or-create-pending", "entity": "osd.7", "format": "json"}]: dispatch 2026-03-08T23:10:20.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:20 vm11 bash[23232]: audit 2026-03-08T23:10:19.660325+0000 mon.c (mon.2) 154 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' 
cmd=[{"prefix": "auth get-or-create-pending", "entity": "osd.7", "format": "json"}]: dispatch 2026-03-08T23:10:20.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:20 vm11 bash[23232]: audit 2026-03-08T23:10:19.660848+0000 mon.a (mon.0) 910 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create-pending", "entity": "osd.7", "format": "json"}]: dispatch 2026-03-08T23:10:20.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:20 vm11 bash[23232]: audit 2026-03-08T23:10:19.660848+0000 mon.a (mon.0) 910 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create-pending", "entity": "osd.7", "format": "json"}]: dispatch 2026-03-08T23:10:20.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:20 vm11 bash[23232]: audit 2026-03-08T23:10:19.663772+0000 mon.a (mon.0) 911 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd='[{"prefix": "auth get-or-create-pending", "entity": "osd.7", "format": "json"}]': finished 2026-03-08T23:10:20.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:20 vm11 bash[23232]: audit 2026-03-08T23:10:19.663772+0000 mon.a (mon.0) 911 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd='[{"prefix": "auth get-or-create-pending", "entity": "osd.7", "format": "json"}]': finished 2026-03-08T23:10:20.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:20 vm11 bash[23232]: cephadm 2026-03-08T23:10:19.667554+0000 mgr.y (mgr.24419) 307 : cephadm [INF] Reconfiguring daemon osd.7 on vm11 2026-03-08T23:10:20.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:20 vm11 bash[23232]: cephadm 2026-03-08T23:10:19.667554+0000 mgr.y (mgr.24419) 307 : cephadm [INF] Reconfiguring daemon osd.7 on vm11 2026-03-08T23:10:20.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:20 vm11 bash[23232]: cluster 2026-03-08T23:10:19.697062+0000 mgr.y (mgr.24419) 308 : cluster [DBG] pgmap v205: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 
2026-03-08T23:10:20.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:20 vm11 bash[23232]: cluster 2026-03-08T23:10:19.697062+0000 mgr.y (mgr.24419) 308 : cluster [DBG] pgmap v205: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:10:20.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:20 vm11 bash[23232]: audit 2026-03-08T23:10:20.044627+0000 mon.a (mon.0) 912 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:10:20.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:20 vm11 bash[23232]: audit 2026-03-08T23:10:20.044627+0000 mon.a (mon.0) 912 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:10:20.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:20 vm11 bash[23232]: audit 2026-03-08T23:10:20.068673+0000 mon.a (mon.0) 913 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:10:20.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:20 vm11 bash[23232]: audit 2026-03-08T23:10:20.068673+0000 mon.a (mon.0) 913 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:10:20.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:20 vm11 bash[23232]: audit 2026-03-08T23:10:20.227613+0000 mon.a (mon.0) 914 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:10:20.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:20 vm11 bash[23232]: audit 2026-03-08T23:10:20.227613+0000 mon.a (mon.0) 914 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:10:20.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:20 vm11 bash[23232]: audit 2026-03-08T23:10:20.235456+0000 mon.a (mon.0) 915 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:10:20.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:20 vm11 bash[23232]: audit 2026-03-08T23:10:20.235456+0000 mon.a (mon.0) 915 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:10:20.763 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:20 vm06 bash[20625]: audit 
2026-03-08T23:10:19.291888+0000 mgr.y (mgr.24419) 304 : audit [DBG] from='client.15114 -' entity='client.admin' cmd=[{"prefix": "orch daemon", "action": "rotate-key", "name": "osd.7", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:10:20.763 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:20 vm06 bash[20625]: audit 2026-03-08T23:10:19.291888+0000 mgr.y (mgr.24419) 304 : audit [DBG] from='client.15114 -' entity='client.admin' cmd=[{"prefix": "orch daemon", "action": "rotate-key", "name": "osd.7", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:10:20.763 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:20 vm06 bash[20625]: cephadm 2026-03-08T23:10:19.292362+0000 mgr.y (mgr.24419) 305 : cephadm [INF] Schedule rotate-key daemon osd.7 2026-03-08T23:10:20.763 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:20 vm06 bash[20625]: cephadm 2026-03-08T23:10:19.292362+0000 mgr.y (mgr.24419) 305 : cephadm [INF] Schedule rotate-key daemon osd.7 2026-03-08T23:10:20.763 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:20 vm06 bash[20625]: audit 2026-03-08T23:10:19.305670+0000 mon.a (mon.0) 907 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:10:20.763 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:20 vm06 bash[20625]: audit 2026-03-08T23:10:19.305670+0000 mon.a (mon.0) 907 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:10:20.763 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:20 vm06 bash[20625]: audit 2026-03-08T23:10:19.326164+0000 mon.a (mon.0) 908 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:10:20.763 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:20 vm06 bash[20625]: audit 2026-03-08T23:10:19.326164+0000 mon.a (mon.0) 908 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:10:20.763 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:20 vm06 bash[20625]: audit 2026-03-08T23:10:19.327776+0000 mon.c (mon.2) 151 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' 
cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:10:20.763 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:20 vm06 bash[20625]: audit 2026-03-08T23:10:19.327776+0000 mon.c (mon.2) 151 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:10:20.763 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:20 vm06 bash[20625]: audit 2026-03-08T23:10:19.639178+0000 mon.c (mon.2) 152 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:10:20.763 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:20 vm06 bash[20625]: audit 2026-03-08T23:10:19.639178+0000 mon.c (mon.2) 152 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:10:20.763 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:20 vm06 bash[20625]: audit 2026-03-08T23:10:19.640651+0000 mon.c (mon.2) 153 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:10:20.763 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:20 vm06 bash[20625]: audit 2026-03-08T23:10:19.640651+0000 mon.c (mon.2) 153 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:10:20.763 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:20 vm06 bash[20625]: audit 2026-03-08T23:10:19.647220+0000 mon.a (mon.0) 909 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:10:20.763 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:20 vm06 bash[20625]: audit 2026-03-08T23:10:19.647220+0000 mon.a (mon.0) 909 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:10:20.763 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:20 vm06 bash[20625]: 
cephadm 2026-03-08T23:10:19.660072+0000 mgr.y (mgr.24419) 306 : cephadm [INF] Rotating authentication key for osd.7 2026-03-08T23:10:20.763 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:20 vm06 bash[20625]: cephadm 2026-03-08T23:10:19.660072+0000 mgr.y (mgr.24419) 306 : cephadm [INF] Rotating authentication key for osd.7 2026-03-08T23:10:20.763 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:20 vm06 bash[20625]: audit 2026-03-08T23:10:19.660325+0000 mon.c (mon.2) 154 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get-or-create-pending", "entity": "osd.7", "format": "json"}]: dispatch 2026-03-08T23:10:20.763 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:20 vm06 bash[20625]: audit 2026-03-08T23:10:19.660325+0000 mon.c (mon.2) 154 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get-or-create-pending", "entity": "osd.7", "format": "json"}]: dispatch 2026-03-08T23:10:20.763 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:20 vm06 bash[20625]: audit 2026-03-08T23:10:19.660848+0000 mon.a (mon.0) 910 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create-pending", "entity": "osd.7", "format": "json"}]: dispatch 2026-03-08T23:10:20.764 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:20 vm06 bash[20625]: audit 2026-03-08T23:10:19.660848+0000 mon.a (mon.0) 910 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create-pending", "entity": "osd.7", "format": "json"}]: dispatch 2026-03-08T23:10:20.764 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:20 vm06 bash[20625]: audit 2026-03-08T23:10:19.663772+0000 mon.a (mon.0) 911 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd='[{"prefix": "auth get-or-create-pending", "entity": "osd.7", "format": "json"}]': finished 2026-03-08T23:10:20.764 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:20 vm06 bash[20625]: audit 2026-03-08T23:10:19.663772+0000 mon.a 
(mon.0) 911 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd='[{"prefix": "auth get-or-create-pending", "entity": "osd.7", "format": "json"}]': finished 2026-03-08T23:10:20.764 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:20 vm06 bash[20625]: cephadm 2026-03-08T23:10:19.667554+0000 mgr.y (mgr.24419) 307 : cephadm [INF] Reconfiguring daemon osd.7 on vm11 2026-03-08T23:10:20.764 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:20 vm06 bash[20625]: cephadm 2026-03-08T23:10:19.667554+0000 mgr.y (mgr.24419) 307 : cephadm [INF] Reconfiguring daemon osd.7 on vm11 2026-03-08T23:10:20.764 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:20 vm06 bash[20625]: cluster 2026-03-08T23:10:19.697062+0000 mgr.y (mgr.24419) 308 : cluster [DBG] pgmap v205: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:10:20.764 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:20 vm06 bash[20625]: cluster 2026-03-08T23:10:19.697062+0000 mgr.y (mgr.24419) 308 : cluster [DBG] pgmap v205: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:10:20.764 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:20 vm06 bash[20625]: audit 2026-03-08T23:10:20.044627+0000 mon.a (mon.0) 912 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:10:20.764 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:20 vm06 bash[20625]: audit 2026-03-08T23:10:20.044627+0000 mon.a (mon.0) 912 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:10:20.764 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:20 vm06 bash[20625]: audit 2026-03-08T23:10:20.068673+0000 mon.a (mon.0) 913 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:10:20.764 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:20 vm06 bash[20625]: audit 2026-03-08T23:10:20.068673+0000 mon.a (mon.0) 913 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:10:20.764 
INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:20 vm06 bash[20625]: audit 2026-03-08T23:10:20.227613+0000 mon.a (mon.0) 914 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:10:20.764 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:20 vm06 bash[20625]: audit 2026-03-08T23:10:20.227613+0000 mon.a (mon.0) 914 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:10:20.764 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:20 vm06 bash[20625]: audit 2026-03-08T23:10:20.235456+0000 mon.a (mon.0) 915 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:10:20.764 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:20 vm06 bash[20625]: audit 2026-03-08T23:10:20.235456+0000 mon.a (mon.0) 915 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:10:20.764 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:20 vm06 bash[27746]: audit 2026-03-08T23:10:19.291888+0000 mgr.y (mgr.24419) 304 : audit [DBG] from='client.15114 -' entity='client.admin' cmd=[{"prefix": "orch daemon", "action": "rotate-key", "name": "osd.7", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:10:20.764 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:20 vm06 bash[27746]: audit 2026-03-08T23:10:19.291888+0000 mgr.y (mgr.24419) 304 : audit [DBG] from='client.15114 -' entity='client.admin' cmd=[{"prefix": "orch daemon", "action": "rotate-key", "name": "osd.7", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:10:20.764 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:20 vm06 bash[27746]: cephadm 2026-03-08T23:10:19.292362+0000 mgr.y (mgr.24419) 305 : cephadm [INF] Schedule rotate-key daemon osd.7 2026-03-08T23:10:20.764 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:20 vm06 bash[27746]: cephadm 2026-03-08T23:10:19.292362+0000 mgr.y (mgr.24419) 305 : cephadm [INF] Schedule rotate-key daemon osd.7 2026-03-08T23:10:20.764 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:20 vm06 bash[27746]: audit 2026-03-08T23:10:19.305670+0000 mon.a (mon.0) 907 : 
audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:10:20.764 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:20 vm06 bash[27746]: audit 2026-03-08T23:10:19.305670+0000 mon.a (mon.0) 907 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:10:20.764 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:20 vm06 bash[27746]: audit 2026-03-08T23:10:19.326164+0000 mon.a (mon.0) 908 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:10:20.764 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:20 vm06 bash[27746]: audit 2026-03-08T23:10:19.326164+0000 mon.a (mon.0) 908 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:10:20.764 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:20 vm06 bash[27746]: audit 2026-03-08T23:10:19.327776+0000 mon.c (mon.2) 151 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:10:20.764 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:20 vm06 bash[27746]: audit 2026-03-08T23:10:19.327776+0000 mon.c (mon.2) 151 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:10:20.764 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:20 vm06 bash[27746]: audit 2026-03-08T23:10:19.639178+0000 mon.c (mon.2) 152 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:10:20.764 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:20 vm06 bash[27746]: audit 2026-03-08T23:10:19.639178+0000 mon.c (mon.2) 152 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:10:20.764 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:20 vm06 bash[27746]: audit 2026-03-08T23:10:19.640651+0000 mon.c (mon.2) 153 : audit [INF] from='mgr.24419 
192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:10:20.764 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:20 vm06 bash[27746]: audit 2026-03-08T23:10:19.640651+0000 mon.c (mon.2) 153 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:10:20.764 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:20 vm06 bash[27746]: audit 2026-03-08T23:10:19.647220+0000 mon.a (mon.0) 909 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:10:20.764 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:20 vm06 bash[27746]: audit 2026-03-08T23:10:19.647220+0000 mon.a (mon.0) 909 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:10:20.764 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:20 vm06 bash[27746]: cephadm 2026-03-08T23:10:19.660072+0000 mgr.y (mgr.24419) 306 : cephadm [INF] Rotating authentication key for osd.7 2026-03-08T23:10:20.764 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:20 vm06 bash[27746]: cephadm 2026-03-08T23:10:19.660072+0000 mgr.y (mgr.24419) 306 : cephadm [INF] Rotating authentication key for osd.7 2026-03-08T23:10:20.764 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:20 vm06 bash[27746]: audit 2026-03-08T23:10:19.660325+0000 mon.c (mon.2) 154 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get-or-create-pending", "entity": "osd.7", "format": "json"}]: dispatch 2026-03-08T23:10:20.764 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:20 vm06 bash[27746]: audit 2026-03-08T23:10:19.660325+0000 mon.c (mon.2) 154 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get-or-create-pending", "entity": "osd.7", "format": "json"}]: dispatch 2026-03-08T23:10:20.764 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:20 vm06 bash[27746]: audit 
2026-03-08T23:10:19.660848+0000 mon.a (mon.0) 910 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create-pending", "entity": "osd.7", "format": "json"}]: dispatch 2026-03-08T23:10:20.764 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:20 vm06 bash[27746]: audit 2026-03-08T23:10:19.660848+0000 mon.a (mon.0) 910 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create-pending", "entity": "osd.7", "format": "json"}]: dispatch 2026-03-08T23:10:20.764 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:20 vm06 bash[27746]: audit 2026-03-08T23:10:19.663772+0000 mon.a (mon.0) 911 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd='[{"prefix": "auth get-or-create-pending", "entity": "osd.7", "format": "json"}]': finished 2026-03-08T23:10:20.764 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:20 vm06 bash[27746]: audit 2026-03-08T23:10:19.663772+0000 mon.a (mon.0) 911 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd='[{"prefix": "auth get-or-create-pending", "entity": "osd.7", "format": "json"}]': finished 2026-03-08T23:10:20.764 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:20 vm06 bash[27746]: cephadm 2026-03-08T23:10:19.667554+0000 mgr.y (mgr.24419) 307 : cephadm [INF] Reconfiguring daemon osd.7 on vm11 2026-03-08T23:10:20.764 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:20 vm06 bash[27746]: cephadm 2026-03-08T23:10:19.667554+0000 mgr.y (mgr.24419) 307 : cephadm [INF] Reconfiguring daemon osd.7 on vm11 2026-03-08T23:10:20.764 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:20 vm06 bash[27746]: cluster 2026-03-08T23:10:19.697062+0000 mgr.y (mgr.24419) 308 : cluster [DBG] pgmap v205: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:10:20.764 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:20 vm06 bash[27746]: cluster 2026-03-08T23:10:19.697062+0000 mgr.y (mgr.24419) 308 : cluster [DBG] pgmap v205: 132 pgs: 132 active+clean; 
455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:10:20.764 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:20 vm06 bash[27746]: audit 2026-03-08T23:10:20.044627+0000 mon.a (mon.0) 912 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:10:20.764 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:20 vm06 bash[27746]: audit 2026-03-08T23:10:20.044627+0000 mon.a (mon.0) 912 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:10:20.764 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:20 vm06 bash[27746]: audit 2026-03-08T23:10:20.068673+0000 mon.a (mon.0) 913 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:10:20.764 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:20 vm06 bash[27746]: audit 2026-03-08T23:10:20.068673+0000 mon.a (mon.0) 913 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:10:20.764 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:20 vm06 bash[27746]: audit 2026-03-08T23:10:20.227613+0000 mon.a (mon.0) 914 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:10:20.764 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:20 vm06 bash[27746]: audit 2026-03-08T23:10:20.227613+0000 mon.a (mon.0) 914 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:10:20.764 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:20 vm06 bash[27746]: audit 2026-03-08T23:10:20.235456+0000 mon.a (mon.0) 915 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:10:20.764 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:20 vm06 bash[27746]: audit 2026-03-08T23:10:20.235456+0000 mon.a (mon.0) 915 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:10:21.028 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:10:20 vm06 bash[20883]: ::ffff:192.168.123.111 - - [08/Mar/2026:23:10:20] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-08T23:10:22.753 INFO:journalctl@ceph.iscsi.iscsi.a.vm11.stdout:Mar 08 23:10:22 vm11 bash[48986]: debug there is no 
tcmu-runner data available 2026-03-08T23:10:23.028 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:22 vm06 bash[20625]: cluster 2026-03-08T23:10:21.697469+0000 mgr.y (mgr.24419) 309 : cluster [DBG] pgmap v206: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:10:23.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:22 vm06 bash[20625]: cluster 2026-03-08T23:10:21.697469+0000 mgr.y (mgr.24419) 309 : cluster [DBG] pgmap v206: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:10:23.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:22 vm06 bash[27746]: cluster 2026-03-08T23:10:21.697469+0000 mgr.y (mgr.24419) 309 : cluster [DBG] pgmap v206: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:10:23.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:22 vm06 bash[27746]: cluster 2026-03-08T23:10:21.697469+0000 mgr.y (mgr.24419) 309 : cluster [DBG] pgmap v206: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:10:23.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:22 vm11 bash[23232]: cluster 2026-03-08T23:10:21.697469+0000 mgr.y (mgr.24419) 309 : cluster [DBG] pgmap v206: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:10:23.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:22 vm11 bash[23232]: cluster 2026-03-08T23:10:21.697469+0000 mgr.y (mgr.24419) 309 : cluster [DBG] pgmap v206: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:10:24.028 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:23 vm06 bash[20625]: audit 2026-03-08T23:10:22.391434+0000 mgr.y (mgr.24419) 310 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": 
"service status", "format": "json"}]: dispatch 2026-03-08T23:10:24.028 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:23 vm06 bash[20625]: audit 2026-03-08T23:10:22.391434+0000 mgr.y (mgr.24419) 310 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:10:24.028 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:23 vm06 bash[20625]: audit 2026-03-08T23:10:22.907650+0000 mon.c (mon.2) 155 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:10:24.028 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:23 vm06 bash[20625]: audit 2026-03-08T23:10:22.907650+0000 mon.c (mon.2) 155 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:10:24.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:23 vm06 bash[27746]: audit 2026-03-08T23:10:22.391434+0000 mgr.y (mgr.24419) 310 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:10:24.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:23 vm06 bash[27746]: audit 2026-03-08T23:10:22.391434+0000 mgr.y (mgr.24419) 310 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:10:24.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:23 vm06 bash[27746]: audit 2026-03-08T23:10:22.907650+0000 mon.c (mon.2) 155 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:10:24.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:23 vm06 bash[27746]: audit 2026-03-08T23:10:22.907650+0000 mon.c (mon.2) 155 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' 
entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:10:24.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:23 vm11 bash[23232]: audit 2026-03-08T23:10:22.391434+0000 mgr.y (mgr.24419) 310 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:10:24.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:23 vm11 bash[23232]: audit 2026-03-08T23:10:22.391434+0000 mgr.y (mgr.24419) 310 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:10:24.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:23 vm11 bash[23232]: audit 2026-03-08T23:10:22.907650+0000 mon.c (mon.2) 155 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:10:24.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:23 vm11 bash[23232]: audit 2026-03-08T23:10:22.907650+0000 mon.c (mon.2) 155 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:10:24.346 INFO:teuthology.orchestra.run.vm06.stderr:++ ceph auth get-key osd.7 2026-03-08T23:10:24.525 INFO:teuthology.orchestra.run.vm06.stderr:+ NK=AQD+/61puSYrJxAAO3gSAYSeaX529oktDoTwvg== 2026-03-08T23:10:24.525 INFO:teuthology.orchestra.run.vm06.stderr:+ '[' AQD+/61puSYrJxAAO3gSAYSeaX529oktDoTwvg== == AQD+/61puSYrJxAAO3gSAYSeaX529oktDoTwvg== ']' 2026-03-08T23:10:24.525 INFO:teuthology.orchestra.run.vm06.stderr:+ sleep 5 2026-03-08T23:10:25.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:24 vm06 bash[20625]: cluster 2026-03-08T23:10:23.697693+0000 mgr.y (mgr.24419) 311 : cluster [DBG] pgmap v207: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:10:25.029 
INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:24 vm06 bash[20625]: cluster 2026-03-08T23:10:23.697693+0000 mgr.y (mgr.24419) 311 : cluster [DBG] pgmap v207: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:10:25.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:24 vm06 bash[20625]: audit 2026-03-08T23:10:24.517275+0000 mon.c (mon.2) 156 : audit [INF] from='client.? 192.168.123.106:0/384898790' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.7"}]: dispatch 2026-03-08T23:10:25.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:24 vm06 bash[20625]: audit 2026-03-08T23:10:24.517275+0000 mon.c (mon.2) 156 : audit [INF] from='client.? 192.168.123.106:0/384898790' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.7"}]: dispatch 2026-03-08T23:10:25.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:24 vm06 bash[27746]: cluster 2026-03-08T23:10:23.697693+0000 mgr.y (mgr.24419) 311 : cluster [DBG] pgmap v207: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:10:25.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:24 vm06 bash[27746]: cluster 2026-03-08T23:10:23.697693+0000 mgr.y (mgr.24419) 311 : cluster [DBG] pgmap v207: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:10:25.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:24 vm06 bash[27746]: audit 2026-03-08T23:10:24.517275+0000 mon.c (mon.2) 156 : audit [INF] from='client.? 192.168.123.106:0/384898790' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.7"}]: dispatch 2026-03-08T23:10:25.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:24 vm06 bash[27746]: audit 2026-03-08T23:10:24.517275+0000 mon.c (mon.2) 156 : audit [INF] from='client.? 
192.168.123.106:0/384898790' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.7"}]: dispatch 2026-03-08T23:10:25.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:24 vm11 bash[23232]: cluster 2026-03-08T23:10:23.697693+0000 mgr.y (mgr.24419) 311 : cluster [DBG] pgmap v207: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:10:25.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:24 vm11 bash[23232]: cluster 2026-03-08T23:10:23.697693+0000 mgr.y (mgr.24419) 311 : cluster [DBG] pgmap v207: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:10:25.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:24 vm11 bash[23232]: audit 2026-03-08T23:10:24.517275+0000 mon.c (mon.2) 156 : audit [INF] from='client.? 192.168.123.106:0/384898790' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.7"}]: dispatch 2026-03-08T23:10:25.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:24 vm11 bash[23232]: audit 2026-03-08T23:10:24.517275+0000 mon.c (mon.2) 156 : audit [INF] from='client.? 
192.168.123.106:0/384898790' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.7"}]: dispatch 2026-03-08T23:10:27.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:26 vm06 bash[20625]: cluster 2026-03-08T23:10:25.698094+0000 mgr.y (mgr.24419) 312 : cluster [DBG] pgmap v208: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:10:27.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:26 vm06 bash[20625]: cluster 2026-03-08T23:10:25.698094+0000 mgr.y (mgr.24419) 312 : cluster [DBG] pgmap v208: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:10:27.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:26 vm06 bash[27746]: cluster 2026-03-08T23:10:25.698094+0000 mgr.y (mgr.24419) 312 : cluster [DBG] pgmap v208: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:10:27.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:26 vm06 bash[27746]: cluster 2026-03-08T23:10:25.698094+0000 mgr.y (mgr.24419) 312 : cluster [DBG] pgmap v208: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:10:27.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:26 vm11 bash[23232]: cluster 2026-03-08T23:10:25.698094+0000 mgr.y (mgr.24419) 312 : cluster [DBG] pgmap v208: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:10:27.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:26 vm11 bash[23232]: cluster 2026-03-08T23:10:25.698094+0000 mgr.y (mgr.24419) 312 : cluster [DBG] pgmap v208: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:10:29.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:28 vm11 bash[23232]: cluster 2026-03-08T23:10:27.698396+0000 mgr.y (mgr.24419) 313 
: cluster [DBG] pgmap v209: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:10:29.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:28 vm11 bash[23232]: cluster 2026-03-08T23:10:27.698396+0000 mgr.y (mgr.24419) 313 : cluster [DBG] pgmap v209: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:10:29.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:28 vm06 bash[20625]: cluster 2026-03-08T23:10:27.698396+0000 mgr.y (mgr.24419) 313 : cluster [DBG] pgmap v209: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:10:29.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:28 vm06 bash[20625]: cluster 2026-03-08T23:10:27.698396+0000 mgr.y (mgr.24419) 313 : cluster [DBG] pgmap v209: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:10:29.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:28 vm06 bash[27746]: cluster 2026-03-08T23:10:27.698396+0000 mgr.y (mgr.24419) 313 : cluster [DBG] pgmap v209: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:10:29.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:28 vm06 bash[27746]: cluster 2026-03-08T23:10:27.698396+0000 mgr.y (mgr.24419) 313 : cluster [DBG] pgmap v209: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:10:29.527 INFO:teuthology.orchestra.run.vm06.stderr:++ ceph auth get-key osd.7 2026-03-08T23:10:29.714 INFO:teuthology.orchestra.run.vm06.stderr:+ NK=AQD+/61puSYrJxAAO3gSAYSeaX529oktDoTwvg== 2026-03-08T23:10:29.715 INFO:teuthology.orchestra.run.vm06.stderr:+ '[' AQD+/61puSYrJxAAO3gSAYSeaX529oktDoTwvg== == AQD+/61puSYrJxAAO3gSAYSeaX529oktDoTwvg== ']' 2026-03-08T23:10:29.715 INFO:teuthology.orchestra.run.vm06.stderr:+ sleep 5 
2026-03-08T23:10:30.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:29 vm11 bash[23232]: audit 2026-03-08T23:10:29.706960+0000 mon.c (mon.2) 157 : audit [INF] from='client.? 192.168.123.106:0/453967935' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.7"}]: dispatch 2026-03-08T23:10:30.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:29 vm11 bash[23232]: audit 2026-03-08T23:10:29.706960+0000 mon.c (mon.2) 157 : audit [INF] from='client.? 192.168.123.106:0/453967935' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.7"}]: dispatch 2026-03-08T23:10:30.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:29 vm06 bash[20625]: audit 2026-03-08T23:10:29.706960+0000 mon.c (mon.2) 157 : audit [INF] from='client.? 192.168.123.106:0/453967935' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.7"}]: dispatch 2026-03-08T23:10:30.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:29 vm06 bash[20625]: audit 2026-03-08T23:10:29.706960+0000 mon.c (mon.2) 157 : audit [INF] from='client.? 192.168.123.106:0/453967935' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.7"}]: dispatch 2026-03-08T23:10:30.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:29 vm06 bash[27746]: audit 2026-03-08T23:10:29.706960+0000 mon.c (mon.2) 157 : audit [INF] from='client.? 192.168.123.106:0/453967935' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.7"}]: dispatch 2026-03-08T23:10:30.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:29 vm06 bash[27746]: audit 2026-03-08T23:10:29.706960+0000 mon.c (mon.2) 157 : audit [INF] from='client.? 
192.168.123.106:0/453967935' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.7"}]: dispatch 2026-03-08T23:10:31.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:30 vm06 bash[20625]: cluster 2026-03-08T23:10:29.698736+0000 mgr.y (mgr.24419) 314 : cluster [DBG] pgmap v210: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:10:31.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:30 vm06 bash[20625]: cluster 2026-03-08T23:10:29.698736+0000 mgr.y (mgr.24419) 314 : cluster [DBG] pgmap v210: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:10:31.029 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:10:30 vm06 bash[20883]: ::ffff:192.168.123.111 - - [08/Mar/2026:23:10:30] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-08T23:10:31.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:30 vm06 bash[27746]: cluster 2026-03-08T23:10:29.698736+0000 mgr.y (mgr.24419) 314 : cluster [DBG] pgmap v210: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:10:31.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:30 vm06 bash[27746]: cluster 2026-03-08T23:10:29.698736+0000 mgr.y (mgr.24419) 314 : cluster [DBG] pgmap v210: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:10:31.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:30 vm11 bash[23232]: cluster 2026-03-08T23:10:29.698736+0000 mgr.y (mgr.24419) 314 : cluster [DBG] pgmap v210: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:10:31.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:30 vm11 bash[23232]: cluster 2026-03-08T23:10:29.698736+0000 mgr.y (mgr.24419) 314 : cluster [DBG] pgmap v210: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 
160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:10:32.797 INFO:journalctl@ceph.iscsi.iscsi.a.vm11.stdout:Mar 08 23:10:32 vm11 bash[48986]: debug there is no tcmu-runner data available 2026-03-08T23:10:33.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:32 vm11 bash[23232]: cluster 2026-03-08T23:10:31.699119+0000 mgr.y (mgr.24419) 315 : cluster [DBG] pgmap v211: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:10:33.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:32 vm11 bash[23232]: cluster 2026-03-08T23:10:31.699119+0000 mgr.y (mgr.24419) 315 : cluster [DBG] pgmap v211: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:10:33.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:32 vm06 bash[20625]: cluster 2026-03-08T23:10:31.699119+0000 mgr.y (mgr.24419) 315 : cluster [DBG] pgmap v211: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:10:33.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:32 vm06 bash[20625]: cluster 2026-03-08T23:10:31.699119+0000 mgr.y (mgr.24419) 315 : cluster [DBG] pgmap v211: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:10:33.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:32 vm06 bash[27746]: cluster 2026-03-08T23:10:31.699119+0000 mgr.y (mgr.24419) 315 : cluster [DBG] pgmap v211: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:10:33.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:32 vm06 bash[27746]: cluster 2026-03-08T23:10:31.699119+0000 mgr.y (mgr.24419) 315 : cluster [DBG] pgmap v211: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:10:34.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:33 vm06 
bash[20625]: audit 2026-03-08T23:10:32.402181+0000 mgr.y (mgr.24419) 316 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:10:34.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:33 vm06 bash[20625]: audit 2026-03-08T23:10:32.402181+0000 mgr.y (mgr.24419) 316 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:10:34.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:33 vm06 bash[27746]: audit 2026-03-08T23:10:32.402181+0000 mgr.y (mgr.24419) 316 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:10:34.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:33 vm06 bash[27746]: audit 2026-03-08T23:10:32.402181+0000 mgr.y (mgr.24419) 316 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:10:34.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:33 vm11 bash[23232]: audit 2026-03-08T23:10:32.402181+0000 mgr.y (mgr.24419) 316 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:10:34.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:33 vm11 bash[23232]: audit 2026-03-08T23:10:32.402181+0000 mgr.y (mgr.24419) 316 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:10:34.716 INFO:teuthology.orchestra.run.vm06.stderr:++ ceph auth get-key osd.7 2026-03-08T23:10:34.898 INFO:teuthology.orchestra.run.vm06.stderr:+ NK=AQD+/61puSYrJxAAO3gSAYSeaX529oktDoTwvg== 2026-03-08T23:10:34.898 INFO:teuthology.orchestra.run.vm06.stderr:+ '[' AQD+/61puSYrJxAAO3gSAYSeaX529oktDoTwvg== == AQD+/61puSYrJxAAO3gSAYSeaX529oktDoTwvg== ']' 
2026-03-08T23:10:34.898 INFO:teuthology.orchestra.run.vm06.stderr:+ sleep 5 2026-03-08T23:10:35.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:34 vm06 bash[20625]: cluster 2026-03-08T23:10:33.699373+0000 mgr.y (mgr.24419) 317 : cluster [DBG] pgmap v212: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:10:35.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:34 vm06 bash[20625]: cluster 2026-03-08T23:10:33.699373+0000 mgr.y (mgr.24419) 317 : cluster [DBG] pgmap v212: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:10:35.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:34 vm06 bash[27746]: cluster 2026-03-08T23:10:33.699373+0000 mgr.y (mgr.24419) 317 : cluster [DBG] pgmap v212: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:10:35.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:34 vm06 bash[27746]: cluster 2026-03-08T23:10:33.699373+0000 mgr.y (mgr.24419) 317 : cluster [DBG] pgmap v212: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:10:35.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:34 vm11 bash[23232]: cluster 2026-03-08T23:10:33.699373+0000 mgr.y (mgr.24419) 317 : cluster [DBG] pgmap v212: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:10:35.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:34 vm11 bash[23232]: cluster 2026-03-08T23:10:33.699373+0000 mgr.y (mgr.24419) 317 : cluster [DBG] pgmap v212: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:10:36.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:35 vm06 bash[20625]: audit 2026-03-08T23:10:34.890608+0000 mon.c (mon.2) 158 : audit [INF] from='client.? 
192.168.123.106:0/2756015032' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.7"}]: dispatch 2026-03-08T23:10:36.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:35 vm06 bash[20625]: audit 2026-03-08T23:10:34.890608+0000 mon.c (mon.2) 158 : audit [INF] from='client.? 192.168.123.106:0/2756015032' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.7"}]: dispatch 2026-03-08T23:10:36.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:35 vm06 bash[27746]: audit 2026-03-08T23:10:34.890608+0000 mon.c (mon.2) 158 : audit [INF] from='client.? 192.168.123.106:0/2756015032' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.7"}]: dispatch 2026-03-08T23:10:36.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:35 vm06 bash[27746]: audit 2026-03-08T23:10:34.890608+0000 mon.c (mon.2) 158 : audit [INF] from='client.? 192.168.123.106:0/2756015032' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.7"}]: dispatch 2026-03-08T23:10:36.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:35 vm11 bash[23232]: audit 2026-03-08T23:10:34.890608+0000 mon.c (mon.2) 158 : audit [INF] from='client.? 192.168.123.106:0/2756015032' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.7"}]: dispatch 2026-03-08T23:10:36.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:35 vm11 bash[23232]: audit 2026-03-08T23:10:34.890608+0000 mon.c (mon.2) 158 : audit [INF] from='client.? 
192.168.123.106:0/2756015032' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.7"}]: dispatch 2026-03-08T23:10:37.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:36 vm06 bash[20625]: cluster 2026-03-08T23:10:35.699769+0000 mgr.y (mgr.24419) 318 : cluster [DBG] pgmap v213: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:10:37.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:36 vm06 bash[20625]: cluster 2026-03-08T23:10:35.699769+0000 mgr.y (mgr.24419) 318 : cluster [DBG] pgmap v213: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:10:37.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:36 vm06 bash[27746]: cluster 2026-03-08T23:10:35.699769+0000 mgr.y (mgr.24419) 318 : cluster [DBG] pgmap v213: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:10:37.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:36 vm06 bash[27746]: cluster 2026-03-08T23:10:35.699769+0000 mgr.y (mgr.24419) 318 : cluster [DBG] pgmap v213: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:10:37.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:36 vm11 bash[23232]: cluster 2026-03-08T23:10:35.699769+0000 mgr.y (mgr.24419) 318 : cluster [DBG] pgmap v213: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:10:37.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:36 vm11 bash[23232]: cluster 2026-03-08T23:10:35.699769+0000 mgr.y (mgr.24419) 318 : cluster [DBG] pgmap v213: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:10:39.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:38 vm06 bash[20625]: cluster 2026-03-08T23:10:37.700024+0000 mgr.y (mgr.24419) 
319 : cluster [DBG] pgmap v214: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-08T23:10:39.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:38 vm06 bash[20625]: cluster 2026-03-08T23:10:37.700024+0000 mgr.y (mgr.24419) 319 : cluster [DBG] pgmap v214: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-08T23:10:39.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:38 vm06 bash[20625]: audit 2026-03-08T23:10:37.914458+0000 mon.c (mon.2) 159 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:10:39.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:38 vm06 bash[20625]: audit 2026-03-08T23:10:37.914458+0000 mon.c (mon.2) 159 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:10:39.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:38 vm06 bash[27746]: cluster 2026-03-08T23:10:37.700024+0000 mgr.y (mgr.24419) 319 : cluster [DBG] pgmap v214: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-08T23:10:39.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:38 vm06 bash[27746]: cluster 2026-03-08T23:10:37.700024+0000 mgr.y (mgr.24419) 319 : cluster [DBG] pgmap v214: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-08T23:10:39.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:38 vm06 bash[27746]: audit 2026-03-08T23:10:37.914458+0000 mon.c (mon.2) 159 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:10:39.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:38 vm06 bash[27746]: audit 2026-03-08T23:10:37.914458+0000 
mon.c (mon.2) 159 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:10:39.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:38 vm11 bash[23232]: cluster 2026-03-08T23:10:37.700024+0000 mgr.y (mgr.24419) 319 : cluster [DBG] pgmap v214: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-08T23:10:39.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:38 vm11 bash[23232]: cluster 2026-03-08T23:10:37.700024+0000 mgr.y (mgr.24419) 319 : cluster [DBG] pgmap v214: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-08T23:10:39.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:38 vm11 bash[23232]: audit 2026-03-08T23:10:37.914458+0000 mon.c (mon.2) 159 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:10:39.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:38 vm11 bash[23232]: audit 2026-03-08T23:10:37.914458+0000 mon.c (mon.2) 159 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:10:39.900 INFO:teuthology.orchestra.run.vm06.stderr:++ ceph auth get-key osd.7 2026-03-08T23:10:40.082 INFO:teuthology.orchestra.run.vm06.stderr:+ NK=AQD+/61puSYrJxAAO3gSAYSeaX529oktDoTwvg== 2026-03-08T23:10:40.082 INFO:teuthology.orchestra.run.vm06.stderr:+ '[' AQD+/61puSYrJxAAO3gSAYSeaX529oktDoTwvg== == AQD+/61puSYrJxAAO3gSAYSeaX529oktDoTwvg== ']' 2026-03-08T23:10:40.082 INFO:teuthology.orchestra.run.vm06.stderr:+ sleep 5 2026-03-08T23:10:41.029 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:10:40 vm06 bash[20883]: ::ffff:192.168.123.111 - - [08/Mar/2026:23:10:40] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-08T23:10:41.029 
INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:40 vm06 bash[27746]: cluster 2026-03-08T23:10:39.700382+0000 mgr.y (mgr.24419) 320 : cluster [DBG] pgmap v215: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-08T23:10:41.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:40 vm06 bash[27746]: cluster 2026-03-08T23:10:39.700382+0000 mgr.y (mgr.24419) 320 : cluster [DBG] pgmap v215: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-08T23:10:41.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:40 vm06 bash[27746]: audit 2026-03-08T23:10:40.074910+0000 mon.c (mon.2) 160 : audit [INF] from='client.? 192.168.123.106:0/2930642985' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.7"}]: dispatch 2026-03-08T23:10:41.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:40 vm06 bash[27746]: audit 2026-03-08T23:10:40.074910+0000 mon.c (mon.2) 160 : audit [INF] from='client.? 192.168.123.106:0/2930642985' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.7"}]: dispatch 2026-03-08T23:10:41.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:40 vm06 bash[20625]: cluster 2026-03-08T23:10:39.700382+0000 mgr.y (mgr.24419) 320 : cluster [DBG] pgmap v215: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-08T23:10:41.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:40 vm06 bash[20625]: cluster 2026-03-08T23:10:39.700382+0000 mgr.y (mgr.24419) 320 : cluster [DBG] pgmap v215: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-08T23:10:41.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:40 vm06 bash[20625]: audit 2026-03-08T23:10:40.074910+0000 mon.c (mon.2) 160 : audit [INF] from='client.? 
192.168.123.106:0/2930642985' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.7"}]: dispatch 2026-03-08T23:10:41.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:40 vm06 bash[20625]: audit 2026-03-08T23:10:40.074910+0000 mon.c (mon.2) 160 : audit [INF] from='client.? 192.168.123.106:0/2930642985' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.7"}]: dispatch 2026-03-08T23:10:41.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:40 vm11 bash[23232]: cluster 2026-03-08T23:10:39.700382+0000 mgr.y (mgr.24419) 320 : cluster [DBG] pgmap v215: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-08T23:10:41.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:40 vm11 bash[23232]: cluster 2026-03-08T23:10:39.700382+0000 mgr.y (mgr.24419) 320 : cluster [DBG] pgmap v215: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-08T23:10:41.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:40 vm11 bash[23232]: audit 2026-03-08T23:10:40.074910+0000 mon.c (mon.2) 160 : audit [INF] from='client.? 192.168.123.106:0/2930642985' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.7"}]: dispatch 2026-03-08T23:10:41.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:40 vm11 bash[23232]: audit 2026-03-08T23:10:40.074910+0000 mon.c (mon.2) 160 : audit [INF] from='client.? 
192.168.123.106:0/2930642985' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.7"}]: dispatch 2026-03-08T23:10:42.808 INFO:journalctl@ceph.iscsi.iscsi.a.vm11.stdout:Mar 08 23:10:42 vm11 bash[48986]: debug there is no tcmu-runner data available 2026-03-08T23:10:43.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:42 vm06 bash[20625]: cluster 2026-03-08T23:10:41.700840+0000 mgr.y (mgr.24419) 321 : cluster [DBG] pgmap v216: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:10:43.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:42 vm06 bash[20625]: cluster 2026-03-08T23:10:41.700840+0000 mgr.y (mgr.24419) 321 : cluster [DBG] pgmap v216: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:10:43.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:42 vm06 bash[27746]: cluster 2026-03-08T23:10:41.700840+0000 mgr.y (mgr.24419) 321 : cluster [DBG] pgmap v216: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:10:43.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:42 vm06 bash[27746]: cluster 2026-03-08T23:10:41.700840+0000 mgr.y (mgr.24419) 321 : cluster [DBG] pgmap v216: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:10:43.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:42 vm11 bash[23232]: cluster 2026-03-08T23:10:41.700840+0000 mgr.y (mgr.24419) 321 : cluster [DBG] pgmap v216: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:10:43.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:42 vm11 bash[23232]: cluster 2026-03-08T23:10:41.700840+0000 mgr.y (mgr.24419) 321 : cluster [DBG] pgmap v216: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 
2026-03-08T23:10:44.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:43 vm06 bash[20625]: audit 2026-03-08T23:10:42.413046+0000 mgr.y (mgr.24419) 322 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:10:44.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:43 vm06 bash[20625]: audit 2026-03-08T23:10:42.413046+0000 mgr.y (mgr.24419) 322 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:10:44.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:43 vm06 bash[27746]: audit 2026-03-08T23:10:42.413046+0000 mgr.y (mgr.24419) 322 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:10:44.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:43 vm06 bash[27746]: audit 2026-03-08T23:10:42.413046+0000 mgr.y (mgr.24419) 322 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:10:44.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:43 vm11 bash[23232]: audit 2026-03-08T23:10:42.413046+0000 mgr.y (mgr.24419) 322 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:10:44.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:43 vm11 bash[23232]: audit 2026-03-08T23:10:42.413046+0000 mgr.y (mgr.24419) 322 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:10:45.085 INFO:teuthology.orchestra.run.vm06.stderr:++ ceph auth get-key osd.7 2026-03-08T23:10:45.267 INFO:teuthology.orchestra.run.vm06.stderr:+ NK=AQD+/61puSYrJxAAO3gSAYSeaX529oktDoTwvg== 2026-03-08T23:10:45.267 INFO:teuthology.orchestra.run.vm06.stderr:+ '[' 
AQD+/61puSYrJxAAO3gSAYSeaX529oktDoTwvg== == AQD+/61puSYrJxAAO3gSAYSeaX529oktDoTwvg== ']' 2026-03-08T23:10:45.267 INFO:teuthology.orchestra.run.vm06.stderr:+ sleep 5 2026-03-08T23:10:45.278 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:44 vm06 bash[20625]: cluster 2026-03-08T23:10:43.701066+0000 mgr.y (mgr.24419) 323 : cluster [DBG] pgmap v217: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-08T23:10:45.278 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:44 vm06 bash[20625]: cluster 2026-03-08T23:10:43.701066+0000 mgr.y (mgr.24419) 323 : cluster [DBG] pgmap v217: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-08T23:10:45.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:44 vm06 bash[27746]: cluster 2026-03-08T23:10:43.701066+0000 mgr.y (mgr.24419) 323 : cluster [DBG] pgmap v217: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-08T23:10:45.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:44 vm06 bash[27746]: cluster 2026-03-08T23:10:43.701066+0000 mgr.y (mgr.24419) 323 : cluster [DBG] pgmap v217: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-08T23:10:45.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:44 vm11 bash[23232]: cluster 2026-03-08T23:10:43.701066+0000 mgr.y (mgr.24419) 323 : cluster [DBG] pgmap v217: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-08T23:10:45.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:44 vm11 bash[23232]: cluster 2026-03-08T23:10:43.701066+0000 mgr.y (mgr.24419) 323 : cluster [DBG] pgmap v217: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 767 B/s rd, 0 op/s 2026-03-08T23:10:46.278 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:45 vm06 bash[20625]: audit 
2026-03-08T23:10:45.255826+0000 mon.b (mon.1) 48 : audit [INF] from='client.? 192.168.123.106:0/2713202403' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.7"}]: dispatch 2026-03-08T23:10:46.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:45 vm06 bash[20625]: audit 2026-03-08T23:10:45.255826+0000 mon.b (mon.1) 48 : audit [INF] from='client.? 192.168.123.106:0/2713202403' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.7"}]: dispatch 2026-03-08T23:10:46.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:45 vm06 bash[27746]: audit 2026-03-08T23:10:45.255826+0000 mon.b (mon.1) 48 : audit [INF] from='client.? 192.168.123.106:0/2713202403' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.7"}]: dispatch 2026-03-08T23:10:46.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:45 vm06 bash[27746]: audit 2026-03-08T23:10:45.255826+0000 mon.b (mon.1) 48 : audit [INF] from='client.? 192.168.123.106:0/2713202403' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.7"}]: dispatch 2026-03-08T23:10:46.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:45 vm11 bash[23232]: audit 2026-03-08T23:10:45.255826+0000 mon.b (mon.1) 48 : audit [INF] from='client.? 192.168.123.106:0/2713202403' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.7"}]: dispatch 2026-03-08T23:10:46.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:45 vm11 bash[23232]: audit 2026-03-08T23:10:45.255826+0000 mon.b (mon.1) 48 : audit [INF] from='client.? 
192.168.123.106:0/2713202403' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.7"}]: dispatch 2026-03-08T23:10:47.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:46 vm06 bash[20625]: cluster 2026-03-08T23:10:45.701457+0000 mgr.y (mgr.24419) 324 : cluster [DBG] pgmap v218: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:10:47.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:46 vm06 bash[20625]: cluster 2026-03-08T23:10:45.701457+0000 mgr.y (mgr.24419) 324 : cluster [DBG] pgmap v218: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:10:47.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:46 vm06 bash[27746]: cluster 2026-03-08T23:10:45.701457+0000 mgr.y (mgr.24419) 324 : cluster [DBG] pgmap v218: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:10:47.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:46 vm06 bash[27746]: cluster 2026-03-08T23:10:45.701457+0000 mgr.y (mgr.24419) 324 : cluster [DBG] pgmap v218: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:10:47.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:46 vm11 bash[23232]: cluster 2026-03-08T23:10:45.701457+0000 mgr.y (mgr.24419) 324 : cluster [DBG] pgmap v218: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:10:47.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:46 vm11 bash[23232]: cluster 2026-03-08T23:10:45.701457+0000 mgr.y (mgr.24419) 324 : cluster [DBG] pgmap v218: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:10:49.528 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:49 vm06 bash[20625]: cluster 2026-03-08T23:10:47.701673+0000 mgr.y (mgr.24419) 
325 : cluster [DBG] pgmap v219: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:10:49.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:49 vm06 bash[20625]: cluster 2026-03-08T23:10:47.701673+0000 mgr.y (mgr.24419) 325 : cluster [DBG] pgmap v219: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:10:49.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:49 vm06 bash[27746]: cluster 2026-03-08T23:10:47.701673+0000 mgr.y (mgr.24419) 325 : cluster [DBG] pgmap v219: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:10:49.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:49 vm06 bash[27746]: cluster 2026-03-08T23:10:47.701673+0000 mgr.y (mgr.24419) 325 : cluster [DBG] pgmap v219: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:10:49.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:49 vm11 bash[23232]: cluster 2026-03-08T23:10:47.701673+0000 mgr.y (mgr.24419) 325 : cluster [DBG] pgmap v219: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:10:49.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:49 vm11 bash[23232]: cluster 2026-03-08T23:10:47.701673+0000 mgr.y (mgr.24419) 325 : cluster [DBG] pgmap v219: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:10:50.268 INFO:teuthology.orchestra.run.vm06.stderr:++ ceph auth get-key osd.7 2026-03-08T23:10:50.453 INFO:teuthology.orchestra.run.vm06.stderr:+ NK=AQD+/61puSYrJxAAO3gSAYSeaX529oktDoTwvg== 2026-03-08T23:10:50.453 INFO:teuthology.orchestra.run.vm06.stderr:+ '[' AQD+/61puSYrJxAAO3gSAYSeaX529oktDoTwvg== == AQD+/61puSYrJxAAO3gSAYSeaX529oktDoTwvg== ']' 2026-03-08T23:10:50.453 INFO:teuthology.orchestra.run.vm06.stderr:+ 
sleep 5 2026-03-08T23:10:51.029 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:10:50 vm06 bash[20883]: ::ffff:192.168.123.111 - - [08/Mar/2026:23:10:50] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0" 2026-03-08T23:10:51.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:51 vm06 bash[20625]: cluster 2026-03-08T23:10:49.701941+0000 mgr.y (mgr.24419) 326 : cluster [DBG] pgmap v220: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:10:51.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:51 vm06 bash[20625]: cluster 2026-03-08T23:10:49.701941+0000 mgr.y (mgr.24419) 326 : cluster [DBG] pgmap v220: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:10:51.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:51 vm06 bash[20625]: audit 2026-03-08T23:10:50.445520+0000 mon.c (mon.2) 161 : audit [INF] from='client.? 192.168.123.106:0/4067799421' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.7"}]: dispatch 2026-03-08T23:10:51.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:51 vm06 bash[20625]: audit 2026-03-08T23:10:50.445520+0000 mon.c (mon.2) 161 : audit [INF] from='client.? 
192.168.123.106:0/4067799421' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.7"}]: dispatch 2026-03-08T23:10:51.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:51 vm06 bash[27746]: cluster 2026-03-08T23:10:49.701941+0000 mgr.y (mgr.24419) 326 : cluster [DBG] pgmap v220: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:10:51.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:51 vm06 bash[27746]: cluster 2026-03-08T23:10:49.701941+0000 mgr.y (mgr.24419) 326 : cluster [DBG] pgmap v220: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:10:51.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:51 vm06 bash[27746]: audit 2026-03-08T23:10:50.445520+0000 mon.c (mon.2) 161 : audit [INF] from='client.? 192.168.123.106:0/4067799421' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.7"}]: dispatch 2026-03-08T23:10:51.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:51 vm06 bash[27746]: audit 2026-03-08T23:10:50.445520+0000 mon.c (mon.2) 161 : audit [INF] from='client.? 
192.168.123.106:0/4067799421' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.7"}]: dispatch 2026-03-08T23:10:51.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:51 vm11 bash[23232]: cluster 2026-03-08T23:10:49.701941+0000 mgr.y (mgr.24419) 326 : cluster [DBG] pgmap v220: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:10:51.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:51 vm11 bash[23232]: cluster 2026-03-08T23:10:49.701941+0000 mgr.y (mgr.24419) 326 : cluster [DBG] pgmap v220: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:10:51.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:51 vm11 bash[23232]: audit 2026-03-08T23:10:50.445520+0000 mon.c (mon.2) 161 : audit [INF] from='client.? 192.168.123.106:0/4067799421' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.7"}]: dispatch 2026-03-08T23:10:51.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:51 vm11 bash[23232]: audit 2026-03-08T23:10:50.445520+0000 mon.c (mon.2) 161 : audit [INF] from='client.? 
192.168.123.106:0/4067799421' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.7"}]: dispatch 2026-03-08T23:10:52.808 INFO:journalctl@ceph.iscsi.iscsi.a.vm11.stdout:Mar 08 23:10:52 vm11 bash[48986]: debug there is no tcmu-runner data available 2026-03-08T23:10:53.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:53 vm06 bash[20625]: cluster 2026-03-08T23:10:51.702327+0000 mgr.y (mgr.24419) 327 : cluster [DBG] pgmap v221: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:10:53.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:53 vm06 bash[20625]: cluster 2026-03-08T23:10:51.702327+0000 mgr.y (mgr.24419) 327 : cluster [DBG] pgmap v221: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:10:53.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:53 vm06 bash[20625]: audit 2026-03-08T23:10:52.920132+0000 mon.c (mon.2) 162 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:10:53.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:53 vm06 bash[20625]: audit 2026-03-08T23:10:52.920132+0000 mon.c (mon.2) 162 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:10:53.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:53 vm06 bash[27746]: cluster 2026-03-08T23:10:51.702327+0000 mgr.y (mgr.24419) 327 : cluster [DBG] pgmap v221: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:10:53.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:53 vm06 bash[27746]: cluster 2026-03-08T23:10:51.702327+0000 mgr.y (mgr.24419) 327 : cluster [DBG] pgmap v221: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 
2026-03-08T23:10:53.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:53 vm06 bash[27746]: audit 2026-03-08T23:10:52.920132+0000 mon.c (mon.2) 162 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:10:53.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:53 vm06 bash[27746]: audit 2026-03-08T23:10:52.920132+0000 mon.c (mon.2) 162 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:10:53.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:53 vm11 bash[23232]: cluster 2026-03-08T23:10:51.702327+0000 mgr.y (mgr.24419) 327 : cluster [DBG] pgmap v221: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:10:53.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:53 vm11 bash[23232]: cluster 2026-03-08T23:10:51.702327+0000 mgr.y (mgr.24419) 327 : cluster [DBG] pgmap v221: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:10:53.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:53 vm11 bash[23232]: audit 2026-03-08T23:10:52.920132+0000 mon.c (mon.2) 162 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:10:53.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:53 vm11 bash[23232]: audit 2026-03-08T23:10:52.920132+0000 mon.c (mon.2) 162 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:10:54.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:54 vm06 bash[20625]: audit 2026-03-08T23:10:52.418432+0000 mgr.y (mgr.24419) 328 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service 
status", "format": "json"}]: dispatch 2026-03-08T23:10:54.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:54 vm06 bash[20625]: audit 2026-03-08T23:10:52.418432+0000 mgr.y (mgr.24419) 328 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:10:54.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:54 vm06 bash[27746]: audit 2026-03-08T23:10:52.418432+0000 mgr.y (mgr.24419) 328 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:10:54.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:54 vm06 bash[27746]: audit 2026-03-08T23:10:52.418432+0000 mgr.y (mgr.24419) 328 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:10:54.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:54 vm11 bash[23232]: audit 2026-03-08T23:10:52.418432+0000 mgr.y (mgr.24419) 328 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:10:54.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:54 vm11 bash[23232]: audit 2026-03-08T23:10:52.418432+0000 mgr.y (mgr.24419) 328 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:10:55.455 INFO:teuthology.orchestra.run.vm06.stderr:++ ceph auth get-key osd.7 2026-03-08T23:10:55.528 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:55 vm06 bash[20625]: cluster 2026-03-08T23:10:53.702553+0000 mgr.y (mgr.24419) 329 : cluster [DBG] pgmap v222: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:10:55.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:55 vm06 bash[20625]: cluster 2026-03-08T23:10:53.702553+0000 mgr.y 
(mgr.24419) 329 : cluster [DBG] pgmap v222: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:10:55.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:55 vm06 bash[27746]: cluster 2026-03-08T23:10:53.702553+0000 mgr.y (mgr.24419) 329 : cluster [DBG] pgmap v222: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:10:55.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:55 vm06 bash[27746]: cluster 2026-03-08T23:10:53.702553+0000 mgr.y (mgr.24419) 329 : cluster [DBG] pgmap v222: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:10:55.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:55 vm11 bash[23232]: cluster 2026-03-08T23:10:53.702553+0000 mgr.y (mgr.24419) 329 : cluster [DBG] pgmap v222: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:10:55.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:55 vm11 bash[23232]: cluster 2026-03-08T23:10:53.702553+0000 mgr.y (mgr.24419) 329 : cluster [DBG] pgmap v222: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:10:55.645 INFO:teuthology.orchestra.run.vm06.stderr:+ NK=AQD+/61puSYrJxAAO3gSAYSeaX529oktDoTwvg== 2026-03-08T23:10:55.645 INFO:teuthology.orchestra.run.vm06.stderr:+ '[' AQD+/61puSYrJxAAO3gSAYSeaX529oktDoTwvg== == AQD+/61puSYrJxAAO3gSAYSeaX529oktDoTwvg== ']' 2026-03-08T23:10:55.645 INFO:teuthology.orchestra.run.vm06.stderr:+ sleep 5 2026-03-08T23:10:56.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:56 vm06 bash[20625]: audit 2026-03-08T23:10:55.637607+0000 mon.a (mon.0) 916 : audit [INF] from='client.? 
192.168.123.106:0/1634530864' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.7"}]: dispatch 2026-03-08T23:10:56.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:56 vm06 bash[20625]: audit 2026-03-08T23:10:55.637607+0000 mon.a (mon.0) 916 : audit [INF] from='client.? 192.168.123.106:0/1634530864' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.7"}]: dispatch 2026-03-08T23:10:56.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:56 vm06 bash[27746]: audit 2026-03-08T23:10:55.637607+0000 mon.a (mon.0) 916 : audit [INF] from='client.? 192.168.123.106:0/1634530864' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.7"}]: dispatch 2026-03-08T23:10:56.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:56 vm06 bash[27746]: audit 2026-03-08T23:10:55.637607+0000 mon.a (mon.0) 916 : audit [INF] from='client.? 192.168.123.106:0/1634530864' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.7"}]: dispatch 2026-03-08T23:10:56.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:56 vm11 bash[23232]: audit 2026-03-08T23:10:55.637607+0000 mon.a (mon.0) 916 : audit [INF] from='client.? 192.168.123.106:0/1634530864' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.7"}]: dispatch 2026-03-08T23:10:56.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:56 vm11 bash[23232]: audit 2026-03-08T23:10:55.637607+0000 mon.a (mon.0) 916 : audit [INF] from='client.? 
192.168.123.106:0/1634530864' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.7"}]: dispatch 2026-03-08T23:10:57.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:57 vm06 bash[20625]: cluster 2026-03-08T23:10:55.703025+0000 mgr.y (mgr.24419) 330 : cluster [DBG] pgmap v223: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:10:57.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:57 vm06 bash[20625]: cluster 2026-03-08T23:10:55.703025+0000 mgr.y (mgr.24419) 330 : cluster [DBG] pgmap v223: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:10:57.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:57 vm06 bash[27746]: cluster 2026-03-08T23:10:55.703025+0000 mgr.y (mgr.24419) 330 : cluster [DBG] pgmap v223: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:10:57.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:57 vm06 bash[27746]: cluster 2026-03-08T23:10:55.703025+0000 mgr.y (mgr.24419) 330 : cluster [DBG] pgmap v223: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:10:57.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:57 vm11 bash[23232]: cluster 2026-03-08T23:10:55.703025+0000 mgr.y (mgr.24419) 330 : cluster [DBG] pgmap v223: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:10:57.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:57 vm11 bash[23232]: cluster 2026-03-08T23:10:55.703025+0000 mgr.y (mgr.24419) 330 : cluster [DBG] pgmap v223: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 1.2 KiB/s rd, 1 op/s 2026-03-08T23:10:59.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:59 vm06 bash[20625]: cluster 2026-03-08T23:10:57.703359+0000 mgr.y (mgr.24419) 
331 : cluster [DBG] pgmap v224: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-08T23:10:59.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:10:59 vm06 bash[20625]: cluster 2026-03-08T23:10:57.703359+0000 mgr.y (mgr.24419) 331 : cluster [DBG] pgmap v224: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-08T23:10:59.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:59 vm06 bash[27746]: cluster 2026-03-08T23:10:57.703359+0000 mgr.y (mgr.24419) 331 : cluster [DBG] pgmap v224: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-08T23:10:59.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:10:59 vm06 bash[27746]: cluster 2026-03-08T23:10:57.703359+0000 mgr.y (mgr.24419) 331 : cluster [DBG] pgmap v224: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-08T23:10:59.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:59 vm11 bash[23232]: cluster 2026-03-08T23:10:57.703359+0000 mgr.y (mgr.24419) 331 : cluster [DBG] pgmap v224: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-08T23:10:59.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:10:59 vm11 bash[23232]: cluster 2026-03-08T23:10:57.703359+0000 mgr.y (mgr.24419) 331 : cluster [DBG] pgmap v224: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-08T23:11:00.646 INFO:teuthology.orchestra.run.vm06.stderr:++ ceph auth get-key osd.7
2026-03-08T23:11:00.828 INFO:teuthology.orchestra.run.vm06.stderr:+ NK=AQDbAa5pcntkJxAAFKUT/x+1LjREiIKy4PQqWA==
2026-03-08T23:11:00.828 INFO:teuthology.orchestra.run.vm06.stderr:+ '[' AQD+/61puSYrJxAAO3gSAYSeaX529oktDoTwvg== == AQDbAa5pcntkJxAAFKUT/x+1LjREiIKy4PQqWA== ']'
2026-03-08T23:11:00.828 INFO:teuthology.orchestra.run.vm06.stderr:+ for f in osd.0 osd.1 osd.2 osd.3 osd.4 osd.5 osd.6 osd.7 mgr.y mgr.x
2026-03-08T23:11:00.828 INFO:teuthology.orchestra.run.vm06.stderr:+ echo 'rotating key for mgr.y'
2026-03-08T23:11:00.828 INFO:teuthology.orchestra.run.vm06.stdout:rotating key for mgr.y
2026-03-08T23:11:00.828 INFO:teuthology.orchestra.run.vm06.stderr:++ ceph auth get-key mgr.y
2026-03-08T23:11:01.006 INFO:teuthology.orchestra.run.vm06.stderr:+ K=AQCw/q1ptko8LBAAbRDcBdFGfF7luVs55STIDw==
2026-03-08T23:11:01.006 INFO:teuthology.orchestra.run.vm06.stderr:+ NK=AQCw/q1ptko8LBAAbRDcBdFGfF7luVs55STIDw==
2026-03-08T23:11:01.006 INFO:teuthology.orchestra.run.vm06.stderr:+ ceph orch daemon rotate-key mgr.y
2026-03-08T23:11:01.028 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:00 vm06 bash[20883]: ::ffff:192.168.123.111 - - [08/Mar/2026:23:11:00] "GET /metrics HTTP/1.1" 503 1621 "" "Prometheus/2.51.0"
2026-03-08T23:11:01.163 INFO:teuthology.orchestra.run.vm06.stdout:Scheduled to rotate-key mgr.y on host 'vm06'
2026-03-08T23:11:01.178 INFO:teuthology.orchestra.run.vm06.stderr:+ '[' AQCw/q1ptko8LBAAbRDcBdFGfF7luVs55STIDw== == AQCw/q1ptko8LBAAbRDcBdFGfF7luVs55STIDw== ']'
2026-03-08T23:11:01.178 INFO:teuthology.orchestra.run.vm06.stderr:+ sleep 5
2026-03-08T23:11:01.500 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:01 vm06 bash[20625]: cluster 2026-03-08T23:10:59.703670+0000 mgr.y (mgr.24419) 332 : cluster [DBG] pgmap v225: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-08T23:11:01.500 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:01 vm06 bash[20625]: cluster 2026-03-08T23:10:59.703670+0000 mgr.y (mgr.24419) 332 : cluster [DBG] pgmap v225: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s
2026-03-08T23:11:01.500 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:01 vm06 bash[20625]: audit 2026-03-08T23:11:00.820437+0000 mon.a (mon.0) 917 : audit [INF] from='client.?
192.168.123.106:0/3467302611' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.7"}]: dispatch 2026-03-08T23:11:01.500 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:01 vm06 bash[20625]: audit 2026-03-08T23:11:00.820437+0000 mon.a (mon.0) 917 : audit [INF] from='client.? 192.168.123.106:0/3467302611' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.7"}]: dispatch 2026-03-08T23:11:01.500 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:01 vm06 bash[20625]: audit 2026-03-08T23:11:00.997891+0000 mon.b (mon.1) 49 : audit [INF] from='client.? 192.168.123.106:0/1229023522' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "mgr.y"}]: dispatch 2026-03-08T23:11:01.500 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:01 vm06 bash[20625]: audit 2026-03-08T23:11:00.997891+0000 mon.b (mon.1) 49 : audit [INF] from='client.? 192.168.123.106:0/1229023522' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "mgr.y"}]: dispatch 2026-03-08T23:11:01.500 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:01 vm06 bash[20625]: audit 2026-03-08T23:11:01.156301+0000 mon.a (mon.0) 918 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:11:01.500 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:01 vm06 bash[20625]: audit 2026-03-08T23:11:01.156301+0000 mon.a (mon.0) 918 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:11:01.500 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:01 vm06 bash[20625]: audit 2026-03-08T23:11:01.163482+0000 mon.a (mon.0) 919 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:11:01.500 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:01 vm06 bash[20625]: audit 2026-03-08T23:11:01.163482+0000 mon.a (mon.0) 919 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:11:01.500 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:01 vm06 bash[20625]: audit 2026-03-08T23:11:01.164702+0000 mon.c (mon.2) 163 : audit [DBG] from='mgr.24419 
192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:11:01.500 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:01 vm06 bash[20625]: audit 2026-03-08T23:11:01.164702+0000 mon.c (mon.2) 163 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:11:01.500 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:01 vm06 bash[20625]: audit 2026-03-08T23:11:01.166378+0000 mon.c (mon.2) 164 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:11:01.500 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:01 vm06 bash[20625]: audit 2026-03-08T23:11:01.166378+0000 mon.c (mon.2) 164 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:11:01.500 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:01 vm06 bash[20625]: audit 2026-03-08T23:11:01.167100+0000 mon.c (mon.2) 165 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:11:01.500 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:01 vm06 bash[20625]: audit 2026-03-08T23:11:01.167100+0000 mon.c (mon.2) 165 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:11:01.500 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:01 vm06 bash[20625]: audit 2026-03-08T23:11:01.175693+0000 mon.a (mon.0) 920 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:11:01.500 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:01 vm06 bash[20625]: audit 2026-03-08T23:11:01.175693+0000 mon.a (mon.0) 920 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:11:01.500 
INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:01 vm06 bash[20625]: audit 2026-03-08T23:11:01.187925+0000 mon.c (mon.2) 166 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get-or-create-pending", "entity": "mgr.y", "format": "json"}]: dispatch 2026-03-08T23:11:01.500 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:01 vm06 bash[20625]: audit 2026-03-08T23:11:01.187925+0000 mon.c (mon.2) 166 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get-or-create-pending", "entity": "mgr.y", "format": "json"}]: dispatch 2026-03-08T23:11:01.500 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:01 vm06 bash[20625]: audit 2026-03-08T23:11:01.188248+0000 mon.a (mon.0) 921 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create-pending", "entity": "mgr.y", "format": "json"}]: dispatch 2026-03-08T23:11:01.500 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:01 vm06 bash[20625]: audit 2026-03-08T23:11:01.188248+0000 mon.a (mon.0) 921 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create-pending", "entity": "mgr.y", "format": "json"}]: dispatch 2026-03-08T23:11:01.500 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:01 vm06 bash[20625]: audit 2026-03-08T23:11:01.191045+0000 mon.a (mon.0) 922 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd='[{"prefix": "auth get-or-create-pending", "entity": "mgr.y", "format": "json"}]': finished 2026-03-08T23:11:01.500 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:01 vm06 bash[20625]: audit 2026-03-08T23:11:01.191045+0000 mon.a (mon.0) 922 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd='[{"prefix": "auth get-or-create-pending", "entity": "mgr.y", "format": "json"}]': finished 2026-03-08T23:11:01.500 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:01 vm06 bash[20625]: audit 2026-03-08T23:11:01.195276+0000 mon.c (mon.2) 167 : audit [INF] from='mgr.24419 
192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-08T23:11:01.500 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:01 vm06 bash[20625]: audit 2026-03-08T23:11:01.195276+0000 mon.c (mon.2) 167 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-08T23:11:01.500 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:01 vm06 bash[20625]: audit 2026-03-08T23:11:01.195508+0000 mon.a (mon.0) 923 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-08T23:11:01.500 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:01 vm06 bash[20625]: audit 2026-03-08T23:11:01.195508+0000 mon.a (mon.0) 923 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-08T23:11:01.500 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:01 vm06 bash[20625]: audit 2026-03-08T23:11:01.196345+0000 mon.c (mon.2) 168 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-08T23:11:01.500 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:01 vm06 bash[20625]: audit 2026-03-08T23:11:01.196345+0000 mon.c (mon.2) 168 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-08T23:11:01.500 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:01 vm06 bash[20625]: audit 2026-03-08T23:11:01.197211+0000 mon.c (mon.2) 169 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": 
"config generate-minimal-conf"}]: dispatch 2026-03-08T23:11:01.500 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:01 vm06 bash[20625]: audit 2026-03-08T23:11:01.197211+0000 mon.c (mon.2) 169 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:11:01.501 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:01 vm06 bash[27746]: cluster 2026-03-08T23:10:59.703670+0000 mgr.y (mgr.24419) 332 : cluster [DBG] pgmap v225: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:11:01.501 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:01 vm06 bash[27746]: cluster 2026-03-08T23:10:59.703670+0000 mgr.y (mgr.24419) 332 : cluster [DBG] pgmap v225: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:11:01.501 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:01 vm06 bash[27746]: audit 2026-03-08T23:11:00.820437+0000 mon.a (mon.0) 917 : audit [INF] from='client.? 192.168.123.106:0/3467302611' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.7"}]: dispatch 2026-03-08T23:11:01.501 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:01 vm06 bash[27746]: audit 2026-03-08T23:11:00.820437+0000 mon.a (mon.0) 917 : audit [INF] from='client.? 192.168.123.106:0/3467302611' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.7"}]: dispatch 2026-03-08T23:11:01.501 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:01 vm06 bash[27746]: audit 2026-03-08T23:11:00.997891+0000 mon.b (mon.1) 49 : audit [INF] from='client.? 192.168.123.106:0/1229023522' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "mgr.y"}]: dispatch 2026-03-08T23:11:01.501 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:01 vm06 bash[27746]: audit 2026-03-08T23:11:00.997891+0000 mon.b (mon.1) 49 : audit [INF] from='client.? 
192.168.123.106:0/1229023522' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "mgr.y"}]: dispatch 2026-03-08T23:11:01.501 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:01 vm06 bash[27746]: audit 2026-03-08T23:11:01.156301+0000 mon.a (mon.0) 918 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:11:01.501 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:01 vm06 bash[27746]: audit 2026-03-08T23:11:01.156301+0000 mon.a (mon.0) 918 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:11:01.501 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:01 vm06 bash[27746]: audit 2026-03-08T23:11:01.163482+0000 mon.a (mon.0) 919 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:11:01.501 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:01 vm06 bash[27746]: audit 2026-03-08T23:11:01.163482+0000 mon.a (mon.0) 919 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:11:01.501 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:01 vm06 bash[27746]: audit 2026-03-08T23:11:01.164702+0000 mon.c (mon.2) 163 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:11:01.501 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:01 vm06 bash[27746]: audit 2026-03-08T23:11:01.164702+0000 mon.c (mon.2) 163 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:11:01.501 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:01 vm06 bash[27746]: audit 2026-03-08T23:11:01.166378+0000 mon.c (mon.2) 164 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:11:01.501 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:01 vm06 bash[27746]: audit 2026-03-08T23:11:01.166378+0000 mon.c (mon.2) 164 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' 
entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:11:01.501 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:01 vm06 bash[27746]: audit 2026-03-08T23:11:01.167100+0000 mon.c (mon.2) 165 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:11:01.501 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:01 vm06 bash[27746]: audit 2026-03-08T23:11:01.167100+0000 mon.c (mon.2) 165 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:11:01.501 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:01 vm06 bash[27746]: audit 2026-03-08T23:11:01.175693+0000 mon.a (mon.0) 920 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:11:01.501 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:01 vm06 bash[27746]: audit 2026-03-08T23:11:01.175693+0000 mon.a (mon.0) 920 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:11:01.501 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:01 vm06 bash[27746]: audit 2026-03-08T23:11:01.187925+0000 mon.c (mon.2) 166 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get-or-create-pending", "entity": "mgr.y", "format": "json"}]: dispatch 2026-03-08T23:11:01.501 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:01 vm06 bash[27746]: audit 2026-03-08T23:11:01.187925+0000 mon.c (mon.2) 166 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get-or-create-pending", "entity": "mgr.y", "format": "json"}]: dispatch 2026-03-08T23:11:01.501 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:01 vm06 bash[27746]: audit 2026-03-08T23:11:01.188248+0000 mon.a (mon.0) 921 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create-pending", "entity": "mgr.y", "format": "json"}]: dispatch 
2026-03-08T23:11:01.501 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:01 vm06 bash[27746]: audit 2026-03-08T23:11:01.188248+0000 mon.a (mon.0) 921 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create-pending", "entity": "mgr.y", "format": "json"}]: dispatch 2026-03-08T23:11:01.501 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:01 vm06 bash[27746]: audit 2026-03-08T23:11:01.191045+0000 mon.a (mon.0) 922 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd='[{"prefix": "auth get-or-create-pending", "entity": "mgr.y", "format": "json"}]': finished 2026-03-08T23:11:01.501 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:01 vm06 bash[27746]: audit 2026-03-08T23:11:01.191045+0000 mon.a (mon.0) 922 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd='[{"prefix": "auth get-or-create-pending", "entity": "mgr.y", "format": "json"}]': finished 2026-03-08T23:11:01.501 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:01 vm06 bash[27746]: audit 2026-03-08T23:11:01.195276+0000 mon.c (mon.2) 167 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-08T23:11:01.501 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:01 vm06 bash[27746]: audit 2026-03-08T23:11:01.195276+0000 mon.c (mon.2) 167 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-08T23:11:01.501 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:01 vm06 bash[27746]: audit 2026-03-08T23:11:01.195508+0000 mon.a (mon.0) 923 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-08T23:11:01.501 
INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:01 vm06 bash[27746]: audit 2026-03-08T23:11:01.195508+0000 mon.a (mon.0) 923 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-08T23:11:01.501 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:01 vm06 bash[27746]: audit 2026-03-08T23:11:01.196345+0000 mon.c (mon.2) 168 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-08T23:11:01.501 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:01 vm06 bash[27746]: audit 2026-03-08T23:11:01.196345+0000 mon.c (mon.2) 168 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-08T23:11:01.501 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:01 vm06 bash[27746]: audit 2026-03-08T23:11:01.197211+0000 mon.c (mon.2) 169 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:11:01.501 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:01 vm06 bash[27746]: audit 2026-03-08T23:11:01.197211+0000 mon.c (mon.2) 169 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:11:01.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:01 vm11 bash[23232]: cluster 2026-03-08T23:10:59.703670+0000 mgr.y (mgr.24419) 332 : cluster [DBG] pgmap v225: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 2026-03-08T23:11:01.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:01 vm11 bash[23232]: cluster 2026-03-08T23:10:59.703670+0000 mgr.y (mgr.24419) 332 : cluster [DBG] pgmap v225: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 853 B/s rd, 0 op/s 
2026-03-08T23:11:01.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:01 vm11 bash[23232]: audit 2026-03-08T23:11:00.820437+0000 mon.a (mon.0) 917 : audit [INF] from='client.? 192.168.123.106:0/3467302611' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.7"}]: dispatch 2026-03-08T23:11:01.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:01 vm11 bash[23232]: audit 2026-03-08T23:11:00.820437+0000 mon.a (mon.0) 917 : audit [INF] from='client.? 192.168.123.106:0/3467302611' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "osd.7"}]: dispatch 2026-03-08T23:11:01.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:01 vm11 bash[23232]: audit 2026-03-08T23:11:00.997891+0000 mon.b (mon.1) 49 : audit [INF] from='client.? 192.168.123.106:0/1229023522' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "mgr.y"}]: dispatch 2026-03-08T23:11:01.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:01 vm11 bash[23232]: audit 2026-03-08T23:11:00.997891+0000 mon.b (mon.1) 49 : audit [INF] from='client.? 
192.168.123.106:0/1229023522' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "mgr.y"}]: dispatch 2026-03-08T23:11:01.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:01 vm11 bash[23232]: audit 2026-03-08T23:11:01.156301+0000 mon.a (mon.0) 918 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:11:01.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:01 vm11 bash[23232]: audit 2026-03-08T23:11:01.156301+0000 mon.a (mon.0) 918 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:11:01.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:01 vm11 bash[23232]: audit 2026-03-08T23:11:01.163482+0000 mon.a (mon.0) 919 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:11:01.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:01 vm11 bash[23232]: audit 2026-03-08T23:11:01.163482+0000 mon.a (mon.0) 919 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:11:01.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:01 vm11 bash[23232]: audit 2026-03-08T23:11:01.164702+0000 mon.c (mon.2) 163 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:11:01.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:01 vm11 bash[23232]: audit 2026-03-08T23:11:01.164702+0000 mon.c (mon.2) 163 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:11:01.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:01 vm11 bash[23232]: audit 2026-03-08T23:11:01.166378+0000 mon.c (mon.2) 164 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:11:01.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:01 vm11 bash[23232]: audit 2026-03-08T23:11:01.166378+0000 mon.c (mon.2) 164 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' 
entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:11:01.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:01 vm11 bash[23232]: audit 2026-03-08T23:11:01.167100+0000 mon.c (mon.2) 165 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:11:01.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:01 vm11 bash[23232]: audit 2026-03-08T23:11:01.167100+0000 mon.c (mon.2) 165 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:11:01.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:01 vm11 bash[23232]: audit 2026-03-08T23:11:01.175693+0000 mon.a (mon.0) 920 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:11:01.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:01 vm11 bash[23232]: audit 2026-03-08T23:11:01.175693+0000 mon.a (mon.0) 920 : audit [INF] from='mgr.24419 ' entity='mgr.y' 2026-03-08T23:11:01.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:01 vm11 bash[23232]: audit 2026-03-08T23:11:01.187925+0000 mon.c (mon.2) 166 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get-or-create-pending", "entity": "mgr.y", "format": "json"}]: dispatch 2026-03-08T23:11:01.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:01 vm11 bash[23232]: audit 2026-03-08T23:11:01.187925+0000 mon.c (mon.2) 166 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get-or-create-pending", "entity": "mgr.y", "format": "json"}]: dispatch 2026-03-08T23:11:01.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:01 vm11 bash[23232]: audit 2026-03-08T23:11:01.188248+0000 mon.a (mon.0) 921 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create-pending", "entity": "mgr.y", "format": "json"}]: dispatch 
2026-03-08T23:11:01.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:01 vm11 bash[23232]: audit 2026-03-08T23:11:01.188248+0000 mon.a (mon.0) 921 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create-pending", "entity": "mgr.y", "format": "json"}]: dispatch 2026-03-08T23:11:01.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:01 vm11 bash[23232]: audit 2026-03-08T23:11:01.191045+0000 mon.a (mon.0) 922 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd='[{"prefix": "auth get-or-create-pending", "entity": "mgr.y", "format": "json"}]': finished 2026-03-08T23:11:01.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:01 vm11 bash[23232]: audit 2026-03-08T23:11:01.191045+0000 mon.a (mon.0) 922 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd='[{"prefix": "auth get-or-create-pending", "entity": "mgr.y", "format": "json"}]': finished 2026-03-08T23:11:01.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:01 vm11 bash[23232]: audit 2026-03-08T23:11:01.195276+0000 mon.c (mon.2) 167 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-08T23:11:01.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:01 vm11 bash[23232]: audit 2026-03-08T23:11:01.195276+0000 mon.c (mon.2) 167 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-08T23:11:01.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:01 vm11 bash[23232]: audit 2026-03-08T23:11:01.195508+0000 mon.a (mon.0) 923 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-08T23:11:01.559 
INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:01 vm11 bash[23232]: audit 2026-03-08T23:11:01.195508+0000 mon.a (mon.0) 923 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-08T23:11:01.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:01 vm11 bash[23232]: audit 2026-03-08T23:11:01.196345+0000 mon.c (mon.2) 168 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-08T23:11:01.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:01 vm11 bash[23232]: audit 2026-03-08T23:11:01.196345+0000 mon.c (mon.2) 168 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-08T23:11:01.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:01 vm11 bash[23232]: audit 2026-03-08T23:11:01.197211+0000 mon.c (mon.2) 169 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:11:01.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:01 vm11 bash[23232]: audit 2026-03-08T23:11:01.197211+0000 mon.c (mon.2) 169 : audit [DBG] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:11:01.779 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:01 vm06 bash[20883]: debug 2026-03-08T23:11:01.655+0000 7f756812b640 -1 mgr.server reply reply (13) Permission denied access denied: does your client key have mgr caps? 
See http://docs.ceph.com/en/latest/mgr/administrator/#client-authentication 2026-03-08T23:11:01.779 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:01 vm06 bash[20883]: debug 2026-03-08T23:11:01.655+0000 7f755908d640 -1 log_channel(cephadm) log [ERR] : Non-zero return from ['ceph', '-k', '/var/lib/ceph/mgr/ceph-y/keyring', '-n', 'mgr.y', 'tell', 'mgr.y', 'rotate-key', '-i', '-']: 2026-03-08T23:11:01.647+0000 7f850d3c9640 1 Processor -- start 2026-03-08T23:11:01.779 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:01 vm06 bash[20883]: 2026-03-08T23:11:01.647+0000 7f850d3c9640 1 -- start start 2026-03-08T23:11:01.779 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:01 vm06 bash[20883]: 2026-03-08T23:11:01.647+0000 7f850d3c9640 1 --2- >> [v2:192.168.123.106:3300/0,v1:192.168.123.106:6789/0] conn(0x7f850810f590 0x7f850810f990 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-08T23:11:01.779 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:01 vm06 bash[20883]: 2026-03-08T23:11:01.647+0000 7f850d3c9640 1 -- --> [v2:192.168.123.106:3300/0,v1:192.168.123.106:6789/0] -- mon_getmap magic: 0 -- 0x7f850810ff60 con 0x7f850810f590 2026-03-08T23:11:01.779 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:01 vm06 bash[20883]: 2026-03-08T23:11:01.647+0000 7f8506ffd640 1 --2- >> [v2:192.168.123.106:3300/0,v1:192.168.123.106:6789/0] conn(0x7f850810f590 0x7f850810f990 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-08T23:11:01.779 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:01 vm06 bash[20883]: 2026-03-08T23:11:01.647+0000 7f8506ffd640 1 --2- >> [v2:192.168.123.106:3300/0,v1:192.168.123.106:6789/0] conn(0x7f850810f590 0x7f850810f990 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.106:3300/0 says I am v2:192.168.123.106:38022/0 (socket says 
192.168.123.106:38022) 2026-03-08T23:11:01.779 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:01 vm06 bash[20883]: 2026-03-08T23:11:01.647+0000 7f8506ffd640 1 -- 192.168.123.106:0/674955736 learned_addr learned my addr 192.168.123.106:0/674955736 (peer_addr_for_me v2:192.168.123.106:0/0) 2026-03-08T23:11:01.779 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:01 vm06 bash[20883]: 2026-03-08T23:11:01.647+0000 7f8506ffd640 1 -- 192.168.123.106:0/674955736 --> [v2:192.168.123.106:3300/0,v1:192.168.123.106:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f8508110640 con 0x7f850810f590 2026-03-08T23:11:01.779 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:01 vm06 bash[20883]: 2026-03-08T23:11:01.647+0000 7f8506ffd640 1 --2- 192.168.123.106:0/674955736 >> [v2:192.168.123.106:3300/0,v1:192.168.123.106:6789/0] conn(0x7f850810f590 0x7f850810f990 secure :-1 s=READY pgs=266 cs=0 l=1 rev1=1 crypto rx=0x7f84f0009920 tx=0x7f84f002ef20 comp rx=0 tx=0).ready entity=mon.0 client_cookie=360fa0729f9fb359 server_cookie=0 in_seq=0 out_seq=0 2026-03-08T23:11:01.779 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:01 vm06 bash[20883]: 2026-03-08T23:11:01.647+0000 7f8505ffb640 1 -- 192.168.123.106:0/674955736 <== mon.0 v2:192.168.123.106:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f84f003c070 con 0x7f850810f590 2026-03-08T23:11:01.779 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:01 vm06 bash[20883]: 2026-03-08T23:11:01.647+0000 7f8505ffb640 1 -- 192.168.123.106:0/674955736 --> [v2:192.168.123.106:3300/0,v1:192.168.123.106:6789/0] -- auth(proto 2 2 bytes epoch 0) -- 0x7f84f4003c20 con 0x7f850810f590 2026-03-08T23:11:01.779 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:01 vm06 bash[20883]: 2026-03-08T23:11:01.647+0000 7f8505ffb640 1 -- 192.168.123.106:0/674955736 <== mon.0 v2:192.168.123.106:3300/0 2 ==== config(39 keys) ==== 1702+0+0 (secure 0 0 0) 0x7f84f002fbb0 con 0x7f850810f590 2026-03-08T23:11:01.779 
INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:01 vm06 bash[20883]: 2026-03-08T23:11:01.647+0000 7f8505ffb640 1 -- 192.168.123.106:0/674955736 <== mon.0 v2:192.168.123.106:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f84f0036600 con 0x7f850810f590 2026-03-08T23:11:01.779 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:01 vm06 bash[20883]: 2026-03-08T23:11:01.651+0000 7f850d3c9640 1 -- 192.168.123.106:0/674955736 >> [v2:192.168.123.106:3300/0,v1:192.168.123.106:6789/0] conn(0x7f850810f590 msgr2=0x7f850810f990 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-08T23:11:01.779 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:01 vm06 bash[20883]: 2026-03-08T23:11:01.651+0000 7f850d3c9640 1 --2- 192.168.123.106:0/674955736 >> [v2:192.168.123.106:3300/0,v1:192.168.123.106:6789/0] conn(0x7f850810f590 0x7f850810f990 secure :-1 s=READY pgs=266 cs=0 l=1 rev1=1 crypto rx=0x7f84f0009920 tx=0x7f84f002ef20 comp rx=0 tx=0).stop 2026-03-08T23:11:01.779 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:01 vm06 bash[20883]: 2026-03-08T23:11:01.651+0000 7f850d3c9640 1 -- 192.168.123.106:0/674955736 shutdown_connections 2026-03-08T23:11:01.779 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:01 vm06 bash[20883]: 2026-03-08T23:11:01.651+0000 7f850d3c9640 1 --2- 192.168.123.106:0/674955736 >> [v2:192.168.123.106:3300/0,v1:192.168.123.106:6789/0] conn(0x7f850810f590 0x7f850810f990 unknown :-1 s=CLOSED pgs=266 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-08T23:11:01.779 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:01 vm06 bash[20883]: 2026-03-08T23:11:01.651+0000 7f850d3c9640 1 -- 192.168.123.106:0/674955736 >> 192.168.123.106:0/674955736 conn(0x7f8508071f40 msgr2=0x7f8508074360 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-08T23:11:01.779 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:01 vm06 bash[20883]: 2026-03-08T23:11:01.651+0000 7f850d3c9640 1 -- 192.168.123.106:0/674955736 shutdown_connections 
2026-03-08T23:11:01.779 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:01 vm06 bash[20883]: 2026-03-08T23:11:01.651+0000 7f850d3c9640 1 -- 192.168.123.106:0/674955736 wait complete. 2026-03-08T23:11:01.779 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:01 vm06 bash[20883]: 2026-03-08T23:11:01.651+0000 7f850d3c9640 1 Processor -- start 2026-03-08T23:11:01.779 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:01 vm06 bash[20883]: 2026-03-08T23:11:01.651+0000 7f850d3c9640 1 -- start start 2026-03-08T23:11:01.779 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:01 vm06 bash[20883]: 2026-03-08T23:11:01.651+0000 7f850d3c9640 1 --2- >> [v2:192.168.123.111:3300/0,v1:192.168.123.111:6789/0] conn(0x7f850810f590 0x7f85081a5270 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-08T23:11:01.779 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:01 vm06 bash[20883]: 2026-03-08T23:11:01.651+0000 7f850d3c9640 1 --2- >> [v2:192.168.123.106:3300/0,v1:192.168.123.106:6789/0] conn(0x7f85081a57b0 0x7f85081aa8a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-08T23:11:01.779 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:01 vm06 bash[20883]: 2026-03-08T23:11:01.651+0000 7f850d3c9640 1 --2- >> [v2:192.168.123.106:3301/0,v1:192.168.123.106:6790/0] conn(0x7f85081aade0 0x7f85081ad1d0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-08T23:11:01.779 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:01 vm06 bash[20883]: 2026-03-08T23:11:01.651+0000 7f850d3c9640 1 -- --> [v2:192.168.123.106:3300/0,v1:192.168.123.106:6789/0] -- mon_getmap magic: 0 -- 0x7f85081139f0 con 0x7f85081a57b0 2026-03-08T23:11:01.779 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:01 vm06 bash[20883]: 2026-03-08T23:11:01.651+0000 7f850d3c9640 1 -- --> [v2:192.168.123.111:3300/0,v1:192.168.123.111:6789/0] -- mon_getmap magic: 0 -- 0x7f8508113870 con 0x7f850810f590 
2026-03-08T23:11:01.779 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:01 vm06 bash[20883]: 2026-03-08T23:11:01.651+0000 7f850d3c9640 1 -- --> [v2:192.168.123.106:3301/0,v1:192.168.123.106:6790/0] -- mon_getmap magic: 0 -- 0x7f8508113b70 con 0x7f85081aade0 2026-03-08T23:11:01.779 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:01 vm06 bash[20883]: 2026-03-08T23:11:01.651+0000 7f85077fe640 1 --2- >> [v2:192.168.123.106:3301/0,v1:192.168.123.106:6790/0] conn(0x7f85081aade0 0x7f85081ad1d0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-08T23:11:01.779 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:01 vm06 bash[20883]: 2026-03-08T23:11:01.651+0000 7f85077fe640 1 --2- >> [v2:192.168.123.106:3301/0,v1:192.168.123.106:6790/0] conn(0x7f85081aade0 0x7f85081ad1d0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.106:3301/0 says I am v2:192.168.123.106:60600/0 (socket says 192.168.123.106:60600) 2026-03-08T23:11:01.779 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:01 vm06 bash[20883]: 2026-03-08T23:11:01.651+0000 7f85077fe640 1 -- 192.168.123.106:0/3842259019 learned_addr learned my addr 192.168.123.106:0/3842259019 (peer_addr_for_me v2:192.168.123.106:0/0) 2026-03-08T23:11:01.779 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:01 vm06 bash[20883]: 2026-03-08T23:11:01.651+0000 7f85067fc640 1 --2- 192.168.123.106:0/3842259019 >> [v2:192.168.123.106:3300/0,v1:192.168.123.106:6789/0] conn(0x7f85081a57b0 0x7f85081aa8a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-08T23:11:01.779 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:01 vm06 bash[20883]: 2026-03-08T23:11:01.651+0000 7f8506ffd640 1 --2- 192.168.123.106:0/3842259019 >> [v2:192.168.123.111:3300/0,v1:192.168.123.111:6789/0] 
conn(0x7f850810f590 0x7f85081a5270 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-08T23:11:01.779 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:01 vm06 bash[20883]: 2026-03-08T23:11:01.651+0000 7f85077fe640 1 -- 192.168.123.106:0/3842259019 >> [v2:192.168.123.111:3300/0,v1:192.168.123.111:6789/0] conn(0x7f850810f590 msgr2=0x7f85081a5270 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-08T23:11:01.779 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:01 vm06 bash[20883]: 2026-03-08T23:11:01.651+0000 7f85077fe640 1 --2- 192.168.123.106:0/3842259019 >> [v2:192.168.123.111:3300/0,v1:192.168.123.111:6789/0] conn(0x7f850810f590 0x7f85081a5270 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-08T23:11:01.779 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:01 vm06 bash[20883]: 2026-03-08T23:11:01.651+0000 7f85077fe640 1 -- 192.168.123.106:0/3842259019 >> [v2:192.168.123.106:3300/0,v1:192.168.123.106:6789/0] conn(0x7f85081a57b0 msgr2=0x7f85081aa8a0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-08T23:11:01.779 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:01 vm06 bash[20883]: 2026-03-08T23:11:01.651+0000 7f85077fe640 1 --2- 192.168.123.106:0/3842259019 >> [v2:192.168.123.106:3300/0,v1:192.168.123.106:6789/0] conn(0x7f85081a57b0 0x7f85081aa8a0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-08T23:11:01.779 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:01 vm06 bash[20883]: 2026-03-08T23:11:01.651+0000 7f85077fe640 1 -- 192.168.123.106:0/3842259019 --> [v2:192.168.123.106:3301/0,v1:192.168.123.106:6790/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f85081ad790 con 0x7f85081aade0 2026-03-08T23:11:01.779 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:01 vm06 bash[20883]: 2026-03-08T23:11:01.651+0000 7f85067fc640 1 --2- 
192.168.123.106:0/3842259019 >> [v2:192.168.123.106:3300/0,v1:192.168.123.106:6789/0] conn(0x7f85081a57b0 0x7f85081aa8a0 unknown :-1 s=CLOSED pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_auth_done state changed! 2026-03-08T23:11:01.779 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:01 vm06 bash[20883]: 2026-03-08T23:11:01.651+0000 7f85077fe640 1 --2- 192.168.123.106:0/3842259019 >> [v2:192.168.123.106:3301/0,v1:192.168.123.106:6790/0] conn(0x7f85081aade0 0x7f85081ad1d0 secure :-1 s=READY pgs=163 cs=0 l=1 rev1=1 crypto rx=0x7f84f800b9e0 tx=0x7f84f800beb0 comp rx=0 tx=0).ready entity=mon.2 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-08T23:11:01.779 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:01 vm06 bash[20883]: 2026-03-08T23:11:01.651+0000 7f84e7fff640 1 -- 192.168.123.106:0/3842259019 <== mon.2 v2:192.168.123.106:3301/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f84f800c7c0 con 0x7f85081aade0 2026-03-08T23:11:01.779 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:01 vm06 bash[20883]: 2026-03-08T23:11:01.651+0000 7f84e7fff640 1 -- 192.168.123.106:0/3842259019 --> [v2:192.168.123.106:3301/0,v1:192.168.123.106:6790/0] -- auth(proto 2 2 bytes epoch 0) -- 0x7f84e0003830 con 0x7f85081aade0 2026-03-08T23:11:01.779 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:01 vm06 bash[20883]: 2026-03-08T23:11:01.651+0000 7f84e7fff640 1 -- 192.168.123.106:0/3842259019 <== mon.2 v2:192.168.123.106:3301/0 2 ==== config(39 keys) ==== 1702+0+0 (secure 0 0 0) 0x7f84f8010070 con 0x7f85081aade0 2026-03-08T23:11:01.779 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:01 vm06 bash[20883]: 2026-03-08T23:11:01.651+0000 7f850d3c9640 1 -- 192.168.123.106:0/3842259019 --> [v2:192.168.123.106:3301/0,v1:192.168.123.106:6790/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f850806e2f0 con 0x7f85081aade0 2026-03-08T23:11:01.779 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:01 vm06 bash[20883]: 2026-03-08T23:11:01.651+0000 7f84e7fff640 1 
-- 192.168.123.106:0/3842259019 <== mon.2 v2:192.168.123.106:3301/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7f84f8013370 con 0x7f85081aade0 2026-03-08T23:11:01.780 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:01 vm06 bash[20883]: 2026-03-08T23:11:01.651+0000 7f850d3c9640 1 -- 192.168.123.106:0/3842259019 --> [v2:192.168.123.106:3301/0,v1:192.168.123.106:6790/0] -- mon_subscribe({osdmap=0}) -- 0x7f850806e880 con 0x7f85081aade0 2026-03-08T23:11:01.780 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:01 vm06 bash[20883]: 2026-03-08T23:11:01.651+0000 7f84e7fff640 1 -- 192.168.123.106:0/3842259019 <== mon.2 v2:192.168.123.106:3301/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 194+0+0 (secure 0 0 0) 0x7f850806e880 con 0x7f85081aade0 2026-03-08T23:11:01.780 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:01 vm06 bash[20883]: 2026-03-08T23:11:01.651+0000 7f84e7fff640 0 cephx client: could not set rotating key: decode_decrypt failed. error:bad magic in decode_decrypt, 10349514148093401050 != 18374858748799134293 2026-03-08T23:11:01.780 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:01 vm06 bash[20883]: 2026-03-08T23:11:01.651+0000 7f84e7fff640 1 -- 192.168.123.106:0/3842259019 <== mon.2 v2:192.168.123.106:3301/0 5 ==== mgrmap(e 21) ==== 100060+0+0 (secure 0 0 0) 0x7f84f800c960 con 0x7f85081aade0 2026-03-08T23:11:01.780 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:01 vm06 bash[20883]: 2026-03-08T23:11:01.651+0000 7f84e7fff640 1 --2- 192.168.123.106:0/3842259019 >> v2:192.168.123.106:6800/1959071245 conn(0x7f84e0077c80 0x7f84e007a140 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-08T23:11:01.780 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:01 vm06 bash[20883]: 2026-03-08T23:11:01.651+0000 7f84e7fff640 1 -- 192.168.123.106:0/3842259019 --> v2:192.168.123.106:6800/1959071245 -- command(tid 0: {"prefix": "get_command_descriptions"}) -- 0x7f84e007a810 con 0x7f84e0077c80 
2026-03-08T23:11:01.780 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:01 vm06 bash[20883]: 2026-03-08T23:11:01.651+0000 7f84e7fff640 1 -- 192.168.123.106:0/3842259019 <== mon.2 v2:192.168.123.106:3301/0 6 ==== osd_map(68..68 src has 1..68) ==== 6181+0+0 (secure 0 0 0) 0x7f84f8099ac0 con 0x7f85081aade0 2026-03-08T23:11:01.780 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:01 vm06 bash[20883]: 2026-03-08T23:11:01.655+0000 7f8506ffd640 1 --2- 192.168.123.106:0/3842259019 >> v2:192.168.123.106:6800/1959071245 conn(0x7f84e0077c80 0x7f84e007a140 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-08T23:11:01.780 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:01 vm06 bash[20883]: 2026-03-08T23:11:01.655+0000 7f8506ffd640 1 --2- 192.168.123.106:0/3842259019 >> v2:192.168.123.106:6800/1959071245 conn(0x7f84e0077c80 0x7f84e007a140 secure :-1 s=READY pgs=156 cs=0 l=1 rev1=1 crypto rx=0x7f84f0002410 tx=0x7f84f0031040 comp rx=0 tx=0).ready entity=mgr.24419 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-08T23:11:01.780 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:01 vm06 bash[20883]: 2026-03-08T23:11:01.655+0000 7f84e7fff640 1 -- 192.168.123.106:0/3842259019 <== mgr.24419 v2:192.168.123.106:6800/1959071245 1 ==== command_reply(tid 0: -13 access denied: does your client key have mgr caps? 
See http://docs.ceph.com/en/latest/mgr/administrator/#client-authentication) ==== 134+0+0 (secure 0 0 0) 0x7f84e007a810 con 0x7f84e0077c80 2026-03-08T23:11:01.780 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:01 vm06 bash[20883]: Error EACCES: problem getting command descriptions from mgr.y 2026-03-08T23:11:01.780 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:01 vm06 bash[20883]: debug 2026-03-08T23:11:01.679+0000 7f759755e640 -1 mgr handle_mgr_map I was active but no longer am 2026-03-08T23:11:01.780 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:01 vm06 bash[20883]: ignoring --setuser ceph since I am not root 2026-03-08T23:11:01.780 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:01 vm06 bash[20883]: ignoring --setgroup ceph since I am not root 2026-03-08T23:11:02.058 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:11:01 vm11 bash[24047]: [08/Mar/2026:23:11:01] ENGINE Bus STOPPING 2026-03-08T23:11:02.058 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:11:01 vm11 bash[24047]: [08/Mar/2026:23:11:01] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down 2026-03-08T23:11:02.058 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:11:01 vm11 bash[24047]: [08/Mar/2026:23:11:01] ENGINE Bus STOPPED 2026-03-08T23:11:02.170 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:01 vm06 bash[20883]: debug 2026-03-08T23:11:01.779+0000 7f8b684d8140 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-08T23:11:02.170 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:01 vm06 bash[20883]: debug 2026-03-08T23:11:01.811+0000 7f8b684d8140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-08T23:11:02.170 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:01 vm06 bash[20883]: debug 2026-03-08T23:11:01.915+0000 7f8b684d8140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-08T23:11:02.448 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:02 vm06 bash[20625]: audit 2026-03-08T23:11:01.147831+0000 
mgr.y (mgr.24419) 333 : audit [DBG] from='client.15174 -' entity='client.admin' cmd=[{"prefix": "orch daemon", "action": "rotate-key", "name": "mgr.y", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:11:02.449 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:02 vm06 bash[20625]: audit 2026-03-08T23:11:01.147831+0000 mgr.y (mgr.24419) 333 : audit [DBG] from='client.15174 -' entity='client.admin' cmd=[{"prefix": "orch daemon", "action": "rotate-key", "name": "mgr.y", "target": ["mon-mgr", ""]}]: dispatch 2026-03-08T23:11:02.449 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:02 vm06 bash[20625]: cephadm 2026-03-08T23:11:01.148256+0000 mgr.y (mgr.24419) 334 : cephadm [INF] Schedule rotate-key daemon mgr.y 2026-03-08T23:11:02.449 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:02 vm06 bash[20625]: cephadm 2026-03-08T23:11:01.148256+0000 mgr.y (mgr.24419) 334 : cephadm [INF] Schedule rotate-key daemon mgr.y 2026-03-08T23:11:02.449 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:02 vm06 bash[20625]: cephadm 2026-03-08T23:11:01.187651+0000 mgr.y (mgr.24419) 335 : cephadm [INF] Rotating authentication key for mgr.y 2026-03-08T23:11:02.449 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:02 vm06 bash[20625]: cephadm 2026-03-08T23:11:01.187651+0000 mgr.y (mgr.24419) 335 : cephadm [INF] Rotating authentication key for mgr.y 2026-03-08T23:11:02.449 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:02 vm06 bash[20625]: cephadm 2026-03-08T23:11:01.197997+0000 mgr.y (mgr.24419) 336 : cephadm [INF] Reconfiguring daemon mgr.y on vm06 2026-03-08T23:11:02.449 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:02 vm06 bash[20625]: cephadm 2026-03-08T23:11:01.197997+0000 mgr.y (mgr.24419) 336 : cephadm [INF] Reconfiguring daemon mgr.y on vm06 2026-03-08T23:11:02.449 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:02 vm06 bash[20625]: audit 2026-03-08T23:11:01.589082+0000 mon.a (mon.0) 924 : audit [INF] from='mgr.24419 ' entity='mgr.y' 
2026-03-08T23:11:02.449 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:02 vm06 bash[20625]: audit 2026-03-08T23:11:01.589082+0000 mon.a (mon.0) 924 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:11:02.449 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:02 vm06 bash[20625]: audit 2026-03-08T23:11:01.597282+0000 mon.a (mon.0) 925 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:11:02.449 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:02 vm06 bash[20625]: audit 2026-03-08T23:11:01.664294+0000 mon.c (mon.2) 170 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "mgr fail", "who": "y"}]: dispatch
2026-03-08T23:11:02.449 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:02 vm06 bash[20625]: audit 2026-03-08T23:11:01.664544+0000 mon.a (mon.0) 926 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd=[{"prefix": "mgr fail", "who": "y"}]: dispatch
2026-03-08T23:11:02.449 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:02 vm06 bash[20625]: cluster 2026-03-08T23:11:01.664826+0000 mon.a (mon.0) 927 : cluster [INF] Activating manager daemon x
2026-03-08T23:11:02.449 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:02 vm06 bash[20625]: audit 2026-03-08T23:11:01.669479+0000 mon.a (mon.0) 928 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd='[{"prefix": "mgr fail", "who": "y"}]': finished
2026-03-08T23:11:02.449 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:02 vm06 bash[20625]: audit 2026-03-08T23:11:01.678665+0000 mon.b (mon.1) 50 : audit [DBG] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-08T23:11:02.449 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:02 vm06 bash[20625]: cluster 2026-03-08T23:11:01.678673+0000 mon.a (mon.0) 929 : cluster [DBG] osdmap e69: 8 total, 8 up, 8 in
2026-03-08T23:11:02.449 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:02 vm06 bash[20625]: audit 2026-03-08T23:11:01.679001+0000 mon.b (mon.1) 51 : audit [DBG] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-08T23:11:02.449 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:02 vm06 bash[20625]: cluster 2026-03-08T23:11:01.679078+0000 mon.a (mon.0) 930 : cluster [DBG] mgrmap e22: x(active, starting, since 0.0142475s)
2026-03-08T23:11:02.449 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:02 vm06 bash[20625]: audit 2026-03-08T23:11:01.679537+0000 mon.b (mon.1) 52 : audit [DBG] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-08T23:11:02.449 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:02 vm06 bash[20625]: audit 2026-03-08T23:11:01.679866+0000 mon.b (mon.1) 53 : audit [DBG] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch
2026-03-08T23:11:02.449 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:02 vm06 bash[20625]: audit 2026-03-08T23:11:01.680683+0000 mon.b (mon.1) 54 : audit [DBG] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-08T23:11:02.449 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:02 vm06 bash[20625]: audit 2026-03-08T23:11:01.681250+0000 mon.b (mon.1) 55 : audit [DBG] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-08T23:11:02.449 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:02 vm06 bash[20625]: audit 2026-03-08T23:11:01.681726+0000 mon.b (mon.1) 56 : audit [DBG] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-08T23:11:02.449 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:02 vm06 bash[20625]: audit 2026-03-08T23:11:01.682326+0000 mon.b (mon.1) 57 : audit [DBG] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-08T23:11:02.449 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:02 vm06 bash[20625]: audit 2026-03-08T23:11:01.682840+0000 mon.b (mon.1) 58 : audit [DBG] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-08T23:11:02.449 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:02 vm06 bash[20625]: audit 2026-03-08T23:11:01.683573+0000 mon.b (mon.1) 59 : audit [DBG] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-08T23:11:02.449 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:02 vm06 bash[20625]: audit 2026-03-08T23:11:01.683971+0000 mon.b (mon.1) 60 : audit [DBG] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-08T23:11:02.449 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:02 vm06 bash[20625]: audit 2026-03-08T23:11:01.684456+0000 mon.b (mon.1) 61 : audit [DBG] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-08T23:11:02.449 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:02 vm06 bash[20625]: audit 2026-03-08T23:11:01.685144+0000 mon.b (mon.1) 62 : audit [DBG] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "mds metadata"}]: dispatch
2026-03-08T23:11:02.449 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:02 vm06 bash[20625]: audit 2026-03-08T23:11:01.685522+0000 mon.b (mon.1) 63 : audit [DBG] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "osd metadata"}]: dispatch
2026-03-08T23:11:02.449 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:02 vm06 bash[20625]: audit 2026-03-08T23:11:01.686122+0000 mon.b (mon.1) 64 : audit [DBG] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "mon metadata"}]: dispatch
2026-03-08T23:11:02.449 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:02 vm06 bash[20625]: cluster 2026-03-08T23:11:02.132146+0000 mon.a (mon.0) 931 : cluster [INF] Manager daemon x is now available
2026-03-08T23:11:02.449 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:02 vm06 bash[20625]: audit 2026-03-08T23:11:02.155903+0000 mon.b (mon.1) 65 : audit [DBG] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-08T23:11:02.449 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:02 vm06 bash[20625]: audit 2026-03-08T23:11:02.161000+0000 mon.b (mon.1) 66 : audit [DBG] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T23:11:02.450 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:02 vm06 bash[20625]: audit 2026-03-08T23:11:02.167846+0000 mon.b (mon.1) 67 : audit [INF] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/mirror_snapshot_schedule"}]: dispatch
2026-03-08T23:11:02.450 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:02 vm06 bash[20625]: audit 2026-03-08T23:11:02.170261+0000 mon.a (mon.0) 932 : audit [INF] from='mgr.24448 ' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/mirror_snapshot_schedule"}]: dispatch
2026-03-08T23:11:02.450 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:02 vm06 bash[20625]: audit 2026-03-08T23:11:02.209748+0000 mon.b (mon.1) 68 : audit [INF] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/trash_purge_schedule"}]: dispatch
2026-03-08T23:11:02.450 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:02 vm06 bash[20625]: audit 2026-03-08T23:11:02.212283+0000 mon.a (mon.0) 933 : audit [INF] from='mgr.24448 ' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/trash_purge_schedule"}]: dispatch 2026-03-08T23:11:02.450 
INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:02 vm06 bash[20883]: debug 2026-03-08T23:11:02.167+0000 7f8b684d8140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
2026-03-08T23:11:02.450 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:02 vm06 bash[27746]: audit 2026-03-08T23:11:01.147831+0000 mgr.y (mgr.24419) 333 : audit [DBG] from='client.15174 -' entity='client.admin' cmd=[{"prefix": "orch daemon", "action": "rotate-key", "name": "mgr.y", "target": ["mon-mgr", ""]}]: dispatch
2026-03-08T23:11:02.450 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:02 vm06 bash[27746]: cephadm 2026-03-08T23:11:01.148256+0000 mgr.y (mgr.24419) 334 : cephadm [INF] Schedule rotate-key daemon mgr.y
2026-03-08T23:11:02.450 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:02 vm06 bash[27746]: cephadm 2026-03-08T23:11:01.187651+0000 mgr.y (mgr.24419) 335 : cephadm [INF] Rotating authentication key for mgr.y
2026-03-08T23:11:02.450 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:02 vm06 bash[27746]: cephadm 2026-03-08T23:11:01.197997+0000 mgr.y (mgr.24419) 336 : cephadm [INF] Reconfiguring daemon mgr.y on vm06
2026-03-08T23:11:02.450 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:02 vm06 bash[27746]: audit 2026-03-08T23:11:01.589082+0000 mon.a (mon.0) 924 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:11:02.450 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:02 vm06 bash[27746]: audit 2026-03-08T23:11:01.597282+0000 mon.a (mon.0) 925 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:11:02.450 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:02 vm06 bash[27746]: audit 2026-03-08T23:11:01.664294+0000 mon.c (mon.2) 170 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "mgr fail", "who": "y"}]: dispatch
2026-03-08T23:11:02.450 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:02 vm06 bash[27746]: audit 2026-03-08T23:11:01.664544+0000 mon.a (mon.0) 926 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd=[{"prefix": "mgr fail", "who": "y"}]: dispatch
2026-03-08T23:11:02.450 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:02 vm06 bash[27746]: cluster 2026-03-08T23:11:01.664826+0000 mon.a (mon.0) 927 : cluster [INF] Activating manager daemon x
2026-03-08T23:11:02.450 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:02 vm06 bash[27746]: audit 2026-03-08T23:11:01.669479+0000 mon.a (mon.0) 928 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd='[{"prefix": "mgr fail", "who": "y"}]': finished
2026-03-08T23:11:02.450 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:02 vm06 bash[27746]: audit 2026-03-08T23:11:01.678665+0000 mon.b (mon.1) 50 : audit [DBG] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-08T23:11:02.450 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:02 vm06 bash[27746]: cluster 2026-03-08T23:11:01.678673+0000 mon.a (mon.0) 929 : cluster [DBG] osdmap e69: 8 total, 8 up, 8 in
2026-03-08T23:11:02.450 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:02 vm06 bash[27746]: audit 2026-03-08T23:11:01.679001+0000 mon.b (mon.1) 51 : audit [DBG] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-08T23:11:02.450 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:02 vm06 bash[27746]: cluster 2026-03-08T23:11:01.679078+0000 mon.a (mon.0) 930 : cluster [DBG] mgrmap e22: x(active, starting, since 0.0142475s)
2026-03-08T23:11:02.450 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:02 vm06 bash[27746]: audit 2026-03-08T23:11:01.679537+0000 mon.b (mon.1) 52 : audit [DBG] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-08T23:11:02.450 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:02 vm06 bash[27746]: audit 2026-03-08T23:11:01.679866+0000 mon.b (mon.1) 53 : audit [DBG] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch
2026-03-08T23:11:02.450 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:02 vm06 bash[27746]: audit 2026-03-08T23:11:01.680683+0000 mon.b (mon.1) 54 : audit [DBG] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-08T23:11:02.450 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:02 vm06 bash[27746]: audit 2026-03-08T23:11:01.681250+0000 mon.b (mon.1) 55 : audit [DBG] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-08T23:11:02.450 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:02 vm06 bash[27746]: audit 2026-03-08T23:11:01.681726+0000 mon.b (mon.1) 56 : audit [DBG] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-08T23:11:02.450 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:02 vm06 bash[27746]: audit 2026-03-08T23:11:01.682326+0000 mon.b (mon.1) 57 : audit [DBG] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-08T23:11:02.451 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:02 vm06 bash[27746]: audit 2026-03-08T23:11:01.682840+0000 mon.b (mon.1) 58 : audit [DBG] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-08T23:11:02.451 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:02 vm06 bash[27746]: audit 2026-03-08T23:11:01.683573+0000 mon.b (mon.1) 59 : audit [DBG] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-08T23:11:02.451 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:02 vm06 bash[27746]: audit 2026-03-08T23:11:01.683971+0000 mon.b (mon.1) 60 : audit [DBG] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-08T23:11:02.451 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:02 vm06 bash[27746]: audit 2026-03-08T23:11:01.684456+0000 mon.b (mon.1) 61 : audit [DBG] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-08T23:11:02.451 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:02 vm06 bash[27746]: audit 2026-03-08T23:11:01.685144+0000 mon.b (mon.1) 62 : audit [DBG] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "mds metadata"}]: dispatch
2026-03-08T23:11:02.451 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:02 vm06 bash[27746]: audit 2026-03-08T23:11:01.685522+0000 mon.b (mon.1) 63 : audit [DBG] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "osd metadata"}]: dispatch
2026-03-08T23:11:02.451 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:02 vm06 bash[27746]: audit 2026-03-08T23:11:01.686122+0000 mon.b (mon.1) 64 : audit [DBG] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "mon metadata"}]: dispatch
2026-03-08T23:11:02.451 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:02 vm06 bash[27746]: cluster 2026-03-08T23:11:02.132146+0000 mon.a (mon.0) 931 : cluster [INF] Manager daemon x is now available
2026-03-08T23:11:02.451 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:02 vm06 bash[27746]: audit 2026-03-08T23:11:02.155903+0000 mon.b (mon.1) 65 : audit [DBG] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
2026-03-08T23:11:02.451 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:02 vm06 bash[27746]: audit 2026-03-08T23:11:02.161000+0000 mon.b (mon.1) 66 : audit [DBG] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-08T23:11:02.451 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:02 vm06 bash[27746]: audit 2026-03-08T23:11:02.167846+0000 mon.b (mon.1) 67 : audit [INF] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/mirror_snapshot_schedule"}]: dispatch
2026-03-08T23:11:02.451 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:02 vm06 bash[27746]: audit 2026-03-08T23:11:02.170261+0000 mon.a (mon.0) 932 : audit [INF] from='mgr.24448 ' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/mirror_snapshot_schedule"}]: dispatch
2026-03-08T23:11:02.451 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:02 vm06 bash[27746]: audit 2026-03-08T23:11:02.209748+0000 mon.b (mon.1) 68 : audit [INF] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/trash_purge_schedule"}]: dispatch
2026-03-08T23:11:02.451 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:02 vm06 bash[27746]: audit
2026-03-08T23:11:02.212283+0000 mon.a (mon.0) 933 : audit [INF] from='mgr.24448 ' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/trash_purge_schedule"}]: dispatch
2026-03-08T23:11:02.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:02 vm11 bash[23232]: audit 2026-03-08T23:11:01.147831+0000 mgr.y (mgr.24419) 333 : audit [DBG] from='client.15174 -' entity='client.admin' cmd=[{"prefix": "orch daemon", "action": "rotate-key", "name": "mgr.y", "target": ["mon-mgr", ""]}]: dispatch
2026-03-08T23:11:02.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:02 vm11 bash[23232]: cephadm 2026-03-08T23:11:01.148256+0000 mgr.y (mgr.24419) 334 : cephadm [INF] Schedule rotate-key daemon mgr.y
2026-03-08T23:11:02.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:02 vm11 bash[23232]: cephadm 2026-03-08T23:11:01.187651+0000 mgr.y (mgr.24419) 335 : cephadm [INF] Rotating authentication key for mgr.y
2026-03-08T23:11:02.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:02 vm11 bash[23232]: cephadm 2026-03-08T23:11:01.197997+0000 mgr.y (mgr.24419) 336 : cephadm [INF] Reconfiguring daemon mgr.y on vm06
2026-03-08T23:11:02.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:02 vm11 bash[23232]: audit 2026-03-08T23:11:01.589082+0000 mon.a (mon.0) 924 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:11:02.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:02 vm11 bash[23232]: audit 2026-03-08T23:11:01.597282+0000 mon.a (mon.0) 925 : audit [INF] from='mgr.24419 ' entity='mgr.y'
2026-03-08T23:11:02.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:02 vm11 bash[23232]: audit 2026-03-08T23:11:01.664294+0000 mon.c (mon.2) 170 : audit [INF] from='mgr.24419 192.168.123.106:0/274162966' entity='mgr.y' cmd=[{"prefix": "mgr fail", "who": "y"}]: dispatch
2026-03-08T23:11:02.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:02 vm11 bash[23232]: audit 2026-03-08T23:11:01.664544+0000 mon.a (mon.0) 926 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd=[{"prefix": "mgr fail", "who": "y"}]: dispatch
2026-03-08T23:11:02.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:02 vm11 bash[23232]: cluster 2026-03-08T23:11:01.664826+0000 mon.a (mon.0) 927 : cluster [INF] Activating manager daemon x
2026-03-08T23:11:02.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:02 vm11 bash[23232]: audit 2026-03-08T23:11:01.669479+0000 mon.a (mon.0) 928 : audit [INF] from='mgr.24419 ' entity='mgr.y' cmd='[{"prefix": "mgr fail", "who": "y"}]': finished
2026-03-08T23:11:02.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:02 vm11 bash[23232]: audit 2026-03-08T23:11:01.678665+0000 mon.b (mon.1) 50 : audit [DBG] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-08T23:11:02.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:02 vm11 bash[23232]: cluster 2026-03-08T23:11:01.678673+0000 mon.a (mon.0) 929 : cluster [DBG] osdmap e69: 8 total, 8 up, 8 in
2026-03-08T23:11:02.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:02 vm11 bash[23232]: audit 2026-03-08T23:11:01.679001+0000 mon.b (mon.1) 51
: audit [DBG] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-08T23:11:02.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:02 vm11 bash[23232]: audit 2026-03-08T23:11:01.679001+0000 mon.b (mon.1) 51 : audit [DBG] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-08T23:11:02.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:02 vm11 bash[23232]: cluster 2026-03-08T23:11:01.679078+0000 mon.a (mon.0) 930 : cluster [DBG] mgrmap e22: x(active, starting, since 0.0142475s) 2026-03-08T23:11:02.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:02 vm11 bash[23232]: cluster 2026-03-08T23:11:01.679078+0000 mon.a (mon.0) 930 : cluster [DBG] mgrmap e22: x(active, starting, since 0.0142475s) 2026-03-08T23:11:02.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:02 vm11 bash[23232]: audit 2026-03-08T23:11:01.679537+0000 mon.b (mon.1) 52 : audit [DBG] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-08T23:11:02.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:02 vm11 bash[23232]: audit 2026-03-08T23:11:01.679537+0000 mon.b (mon.1) 52 : audit [DBG] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-08T23:11:02.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:02 vm11 bash[23232]: audit 2026-03-08T23:11:01.679866+0000 mon.b (mon.1) 53 : audit [DBG] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-08T23:11:02.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:02 vm11 bash[23232]: audit 2026-03-08T23:11:01.679866+0000 mon.b (mon.1) 53 : audit [DBG] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 
2026-03-08T23:11:02.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:02 vm11 bash[23232]: audit 2026-03-08T23:11:01.680683+0000 mon.b (mon.1) 54 : audit [DBG] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-08T23:11:02.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:02 vm11 bash[23232]: audit 2026-03-08T23:11:01.680683+0000 mon.b (mon.1) 54 : audit [DBG] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-08T23:11:02.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:02 vm11 bash[23232]: audit 2026-03-08T23:11:01.681250+0000 mon.b (mon.1) 55 : audit [DBG] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-08T23:11:02.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:02 vm11 bash[23232]: audit 2026-03-08T23:11:01.681250+0000 mon.b (mon.1) 55 : audit [DBG] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-08T23:11:02.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:02 vm11 bash[23232]: audit 2026-03-08T23:11:01.681726+0000 mon.b (mon.1) 56 : audit [DBG] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-08T23:11:02.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:02 vm11 bash[23232]: audit 2026-03-08T23:11:01.681726+0000 mon.b (mon.1) 56 : audit [DBG] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-08T23:11:02.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:02 vm11 bash[23232]: audit 2026-03-08T23:11:01.682326+0000 mon.b (mon.1) 57 : audit [DBG] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-08T23:11:02.559 
INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:02 vm11 bash[23232]: audit 2026-03-08T23:11:01.682326+0000 mon.b (mon.1) 57 : audit [DBG] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch 2026-03-08T23:11:02.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:02 vm11 bash[23232]: audit 2026-03-08T23:11:01.682840+0000 mon.b (mon.1) 58 : audit [DBG] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-08T23:11:02.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:02 vm11 bash[23232]: audit 2026-03-08T23:11:01.682840+0000 mon.b (mon.1) 58 : audit [DBG] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch 2026-03-08T23:11:02.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:02 vm11 bash[23232]: audit 2026-03-08T23:11:01.683573+0000 mon.b (mon.1) 59 : audit [DBG] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-08T23:11:02.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:02 vm11 bash[23232]: audit 2026-03-08T23:11:01.683573+0000 mon.b (mon.1) 59 : audit [DBG] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch 2026-03-08T23:11:02.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:02 vm11 bash[23232]: audit 2026-03-08T23:11:01.683971+0000 mon.b (mon.1) 60 : audit [DBG] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-08T23:11:02.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:02 vm11 bash[23232]: audit 2026-03-08T23:11:01.683971+0000 mon.b (mon.1) 60 : audit [DBG] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch 2026-03-08T23:11:02.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 
23:11:02 vm11 bash[23232]: audit 2026-03-08T23:11:01.684456+0000 mon.b (mon.1) 61 : audit [DBG] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-08T23:11:02.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:02 vm11 bash[23232]: audit 2026-03-08T23:11:01.684456+0000 mon.b (mon.1) 61 : audit [DBG] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch 2026-03-08T23:11:02.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:02 vm11 bash[23232]: audit 2026-03-08T23:11:01.685144+0000 mon.b (mon.1) 62 : audit [DBG] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-08T23:11:02.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:02 vm11 bash[23232]: audit 2026-03-08T23:11:01.685144+0000 mon.b (mon.1) 62 : audit [DBG] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-08T23:11:02.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:02 vm11 bash[23232]: audit 2026-03-08T23:11:01.685522+0000 mon.b (mon.1) 63 : audit [DBG] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-08T23:11:02.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:02 vm11 bash[23232]: audit 2026-03-08T23:11:01.685522+0000 mon.b (mon.1) 63 : audit [DBG] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-08T23:11:02.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:02 vm11 bash[23232]: audit 2026-03-08T23:11:01.686122+0000 mon.b (mon.1) 64 : audit [DBG] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-08T23:11:02.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:02 vm11 bash[23232]: audit 2026-03-08T23:11:01.686122+0000 mon.b (mon.1) 64 : audit 
[DBG] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-08T23:11:02.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:02 vm11 bash[23232]: cluster 2026-03-08T23:11:02.132146+0000 mon.a (mon.0) 931 : cluster [INF] Manager daemon x is now available 2026-03-08T23:11:02.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:02 vm11 bash[23232]: cluster 2026-03-08T23:11:02.132146+0000 mon.a (mon.0) 931 : cluster [INF] Manager daemon x is now available 2026-03-08T23:11:02.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:02 vm11 bash[23232]: audit 2026-03-08T23:11:02.155903+0000 mon.b (mon.1) 65 : audit [DBG] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:11:02.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:02 vm11 bash[23232]: audit 2026-03-08T23:11:02.155903+0000 mon.b (mon.1) 65 : audit [DBG] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:11:02.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:02 vm11 bash[23232]: audit 2026-03-08T23:11:02.161000+0000 mon.b (mon.1) 66 : audit [DBG] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:11:02.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:02 vm11 bash[23232]: audit 2026-03-08T23:11:02.161000+0000 mon.b (mon.1) 66 : audit [DBG] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:11:02.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:02 vm11 bash[23232]: audit 2026-03-08T23:11:02.167846+0000 mon.b (mon.1) 67 : audit [INF] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/mirror_snapshot_schedule"}]: 
dispatch 2026-03-08T23:11:02.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:02 vm11 bash[23232]: audit 2026-03-08T23:11:02.167846+0000 mon.b (mon.1) 67 : audit [INF] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/mirror_snapshot_schedule"}]: dispatch 2026-03-08T23:11:02.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:02 vm11 bash[23232]: audit 2026-03-08T23:11:02.170261+0000 mon.a (mon.0) 932 : audit [INF] from='mgr.24448 ' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/mirror_snapshot_schedule"}]: dispatch 2026-03-08T23:11:02.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:02 vm11 bash[23232]: audit 2026-03-08T23:11:02.170261+0000 mon.a (mon.0) 932 : audit [INF] from='mgr.24448 ' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/mirror_snapshot_schedule"}]: dispatch 2026-03-08T23:11:02.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:02 vm11 bash[23232]: audit 2026-03-08T23:11:02.209748+0000 mon.b (mon.1) 68 : audit [INF] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/trash_purge_schedule"}]: dispatch 2026-03-08T23:11:02.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:02 vm11 bash[23232]: audit 2026-03-08T23:11:02.209748+0000 mon.b (mon.1) 68 : audit [INF] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/trash_purge_schedule"}]: dispatch 2026-03-08T23:11:02.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:02 vm11 bash[23232]: audit 2026-03-08T23:11:02.212283+0000 mon.a (mon.0) 933 : audit [INF] from='mgr.24448 ' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/trash_purge_schedule"}]: dispatch 2026-03-08T23:11:02.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:02 vm11 bash[23232]: audit 
2026-03-08T23:11:02.212283+0000 mon.a (mon.0) 933 : audit [INF] from='mgr.24448 ' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/trash_purge_schedule"}]: dispatch 2026-03-08T23:11:02.560 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:11:02 vm11 bash[24047]: [08/Mar/2026:23:11:02] ENGINE Bus STARTING 2026-03-08T23:11:02.560 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:11:02 vm11 bash[24047]: [08/Mar/2026:23:11:02] ENGINE Serving on http://:::9283 2026-03-08T23:11:02.560 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:11:02 vm11 bash[24047]: [08/Mar/2026:23:11:02] ENGINE Bus STARTED 2026-03-08T23:11:02.744 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:02 vm06 bash[20883]: debug 2026-03-08T23:11:02.635+0000 7f8b684d8140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-08T23:11:03.007 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:02 vm06 bash[20883]: debug 2026-03-08T23:11:02.743+0000 7f8b684d8140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-08T23:11:03.007 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:02 vm06 bash[20883]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 2026-03-08T23:11:03.007 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:02 vm06 bash[20883]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 
2026-03-08T23:11:03.007 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:02 vm06 bash[20883]: from numpy import show_config as show_numpy_config 2026-03-08T23:11:03.007 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:02 vm06 bash[20883]: debug 2026-03-08T23:11:02.871+0000 7f8b684d8140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-08T23:11:03.058 INFO:journalctl@ceph.iscsi.iscsi.a.vm11.stdout:Mar 08 23:11:02 vm11 bash[48986]: debug there is no tcmu-runner data available 2026-03-08T23:11:03.278 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:03 vm06 bash[20883]: debug 2026-03-08T23:11:03.003+0000 7f8b684d8140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-08T23:11:03.279 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:03 vm06 bash[20883]: debug 2026-03-08T23:11:03.043+0000 7f8b684d8140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-08T23:11:03.279 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:03 vm06 bash[20883]: debug 2026-03-08T23:11:03.075+0000 7f8b684d8140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-08T23:11:03.279 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:03 vm06 bash[20883]: debug 2026-03-08T23:11:03.115+0000 7f8b684d8140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-08T23:11:03.279 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:03 vm06 bash[20883]: debug 2026-03-08T23:11:03.163+0000 7f8b684d8140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-08T23:11:03.849 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:03 vm06 bash[20625]: cluster 2026-03-08T23:11:02.691802+0000 mon.a (mon.0) 934 : cluster [DBG] mgrmap e23: x(active, since 1.02696s) 2026-03-08T23:11:03.849 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:03 vm06 bash[20625]: cluster 2026-03-08T23:11:02.691802+0000 mon.a (mon.0) 934 : cluster [DBG] mgrmap e23: x(active, since 1.02696s) 2026-03-08T23:11:03.849 
INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:03 vm06 bash[20625]: cephadm 2026-03-08T23:11:02.985962+0000 mgr.x (mgr.24448) 3 : cephadm [INF] [08/Mar/2026:23:11:02] ENGINE Bus STARTING 2026-03-08T23:11:03.849 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:03 vm06 bash[20625]: cephadm 2026-03-08T23:11:02.985962+0000 mgr.x (mgr.24448) 3 : cephadm [INF] [08/Mar/2026:23:11:02] ENGINE Bus STARTING 2026-03-08T23:11:03.849 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:03 vm06 bash[20625]: cephadm 2026-03-08T23:11:03.093401+0000 mgr.x (mgr.24448) 4 : cephadm [INF] [08/Mar/2026:23:11:03] ENGINE Serving on https://192.168.123.111:7150 2026-03-08T23:11:03.849 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:03 vm06 bash[20625]: cephadm 2026-03-08T23:11:03.093401+0000 mgr.x (mgr.24448) 4 : cephadm [INF] [08/Mar/2026:23:11:03] ENGINE Serving on https://192.168.123.111:7150 2026-03-08T23:11:03.849 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:03 vm06 bash[20625]: cephadm 2026-03-08T23:11:03.093942+0000 mgr.x (mgr.24448) 5 : cephadm [INF] [08/Mar/2026:23:11:03] ENGINE Client ('192.168.123.111', 46842) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-08T23:11:03.849 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:03 vm06 bash[20625]: cephadm 2026-03-08T23:11:03.093942+0000 mgr.x (mgr.24448) 5 : cephadm [INF] [08/Mar/2026:23:11:03] ENGINE Client ('192.168.123.111', 46842) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-08T23:11:03.849 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:03 vm06 bash[20625]: cephadm 2026-03-08T23:11:03.194759+0000 mgr.x (mgr.24448) 6 : cephadm [INF] [08/Mar/2026:23:11:03] ENGINE Serving on http://192.168.123.111:8765 2026-03-08T23:11:03.849 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:03 vm06 bash[20625]: cephadm 
2026-03-08T23:11:03.194759+0000 mgr.x (mgr.24448) 6 : cephadm [INF] [08/Mar/2026:23:11:03] ENGINE Serving on http://192.168.123.111:8765 2026-03-08T23:11:03.849 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:03 vm06 bash[20625]: cephadm 2026-03-08T23:11:03.194822+0000 mgr.x (mgr.24448) 7 : cephadm [INF] [08/Mar/2026:23:11:03] ENGINE Bus STARTED 2026-03-08T23:11:03.849 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:03 vm06 bash[20625]: cephadm 2026-03-08T23:11:03.194822+0000 mgr.x (mgr.24448) 7 : cephadm [INF] [08/Mar/2026:23:11:03] ENGINE Bus STARTED 2026-03-08T23:11:03.850 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:03 vm06 bash[20883]: debug 2026-03-08T23:11:03.583+0000 7f8b684d8140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-08T23:11:03.850 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:03 vm06 bash[20883]: debug 2026-03-08T23:11:03.619+0000 7f8b684d8140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-08T23:11:03.850 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:03 vm06 bash[20883]: debug 2026-03-08T23:11:03.655+0000 7f8b684d8140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-08T23:11:03.850 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:03 vm06 bash[20883]: debug 2026-03-08T23:11:03.803+0000 7f8b684d8140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-08T23:11:03.850 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:03 vm06 bash[27746]: cluster 2026-03-08T23:11:02.691802+0000 mon.a (mon.0) 934 : cluster [DBG] mgrmap e23: x(active, since 1.02696s) 2026-03-08T23:11:03.850 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:03 vm06 bash[27746]: cluster 2026-03-08T23:11:02.691802+0000 mon.a (mon.0) 934 : cluster [DBG] mgrmap e23: x(active, since 1.02696s) 2026-03-08T23:11:03.850 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:03 vm06 bash[27746]: cephadm 2026-03-08T23:11:02.985962+0000 mgr.x (mgr.24448) 3 : cephadm [INF] [08/Mar/2026:23:11:02] 
ENGINE Bus STARTING 2026-03-08T23:11:03.850 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:03 vm06 bash[27746]: cephadm 2026-03-08T23:11:02.985962+0000 mgr.x (mgr.24448) 3 : cephadm [INF] [08/Mar/2026:23:11:02] ENGINE Bus STARTING 2026-03-08T23:11:03.850 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:03 vm06 bash[27746]: cephadm 2026-03-08T23:11:03.093401+0000 mgr.x (mgr.24448) 4 : cephadm [INF] [08/Mar/2026:23:11:03] ENGINE Serving on https://192.168.123.111:7150 2026-03-08T23:11:03.850 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:03 vm06 bash[27746]: cephadm 2026-03-08T23:11:03.093401+0000 mgr.x (mgr.24448) 4 : cephadm [INF] [08/Mar/2026:23:11:03] ENGINE Serving on https://192.168.123.111:7150 2026-03-08T23:11:03.850 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:03 vm06 bash[27746]: cephadm 2026-03-08T23:11:03.093942+0000 mgr.x (mgr.24448) 5 : cephadm [INF] [08/Mar/2026:23:11:03] ENGINE Client ('192.168.123.111', 46842) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-08T23:11:03.850 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:03 vm06 bash[27746]: cephadm 2026-03-08T23:11:03.093942+0000 mgr.x (mgr.24448) 5 : cephadm [INF] [08/Mar/2026:23:11:03] ENGINE Client ('192.168.123.111', 46842) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-08T23:11:03.850 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:03 vm06 bash[27746]: cephadm 2026-03-08T23:11:03.194759+0000 mgr.x (mgr.24448) 6 : cephadm [INF] [08/Mar/2026:23:11:03] ENGINE Serving on http://192.168.123.111:8765 2026-03-08T23:11:03.850 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:03 vm06 bash[27746]: cephadm 2026-03-08T23:11:03.194759+0000 mgr.x (mgr.24448) 6 : cephadm [INF] [08/Mar/2026:23:11:03] ENGINE Serving on http://192.168.123.111:8765 2026-03-08T23:11:03.850 
INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:03 vm06 bash[27746]: cephadm 2026-03-08T23:11:03.194822+0000 mgr.x (mgr.24448) 7 : cephadm [INF] [08/Mar/2026:23:11:03] ENGINE Bus STARTED 2026-03-08T23:11:03.850 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:03 vm06 bash[27746]: cephadm 2026-03-08T23:11:03.194822+0000 mgr.x (mgr.24448) 7 : cephadm [INF] [08/Mar/2026:23:11:03] ENGINE Bus STARTED 2026-03-08T23:11:04.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:03 vm11 bash[23232]: cluster 2026-03-08T23:11:02.691802+0000 mon.a (mon.0) 934 : cluster [DBG] mgrmap e23: x(active, since 1.02696s) 2026-03-08T23:11:04.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:03 vm11 bash[23232]: cluster 2026-03-08T23:11:02.691802+0000 mon.a (mon.0) 934 : cluster [DBG] mgrmap e23: x(active, since 1.02696s) 2026-03-08T23:11:04.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:03 vm11 bash[23232]: cephadm 2026-03-08T23:11:02.985962+0000 mgr.x (mgr.24448) 3 : cephadm [INF] [08/Mar/2026:23:11:02] ENGINE Bus STARTING 2026-03-08T23:11:04.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:03 vm11 bash[23232]: cephadm 2026-03-08T23:11:02.985962+0000 mgr.x (mgr.24448) 3 : cephadm [INF] [08/Mar/2026:23:11:02] ENGINE Bus STARTING 2026-03-08T23:11:04.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:03 vm11 bash[23232]: cephadm 2026-03-08T23:11:03.093401+0000 mgr.x (mgr.24448) 4 : cephadm [INF] [08/Mar/2026:23:11:03] ENGINE Serving on https://192.168.123.111:7150 2026-03-08T23:11:04.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:03 vm11 bash[23232]: cephadm 2026-03-08T23:11:03.093401+0000 mgr.x (mgr.24448) 4 : cephadm [INF] [08/Mar/2026:23:11:03] ENGINE Serving on https://192.168.123.111:7150 2026-03-08T23:11:04.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:03 vm11 bash[23232]: cephadm 2026-03-08T23:11:03.093942+0000 mgr.x (mgr.24448) 5 : cephadm [INF] [08/Mar/2026:23:11:03] ENGINE Client ('192.168.123.111', 46842) lost — peer dropped 
the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-08T23:11:04.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:03 vm11 bash[23232]: cephadm 2026-03-08T23:11:03.093942+0000 mgr.x (mgr.24448) 5 : cephadm [INF] [08/Mar/2026:23:11:03] ENGINE Client ('192.168.123.111', 46842) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-08T23:11:04.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:03 vm11 bash[23232]: cephadm 2026-03-08T23:11:03.194759+0000 mgr.x (mgr.24448) 6 : cephadm [INF] [08/Mar/2026:23:11:03] ENGINE Serving on http://192.168.123.111:8765 2026-03-08T23:11:04.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:03 vm11 bash[23232]: cephadm 2026-03-08T23:11:03.194759+0000 mgr.x (mgr.24448) 6 : cephadm [INF] [08/Mar/2026:23:11:03] ENGINE Serving on http://192.168.123.111:8765 2026-03-08T23:11:04.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:03 vm11 bash[23232]: cephadm 2026-03-08T23:11:03.194822+0000 mgr.x (mgr.24448) 7 : cephadm [INF] [08/Mar/2026:23:11:03] ENGINE Bus STARTED 2026-03-08T23:11:04.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:03 vm11 bash[23232]: cephadm 2026-03-08T23:11:03.194822+0000 mgr.x (mgr.24448) 7 : cephadm [INF] [08/Mar/2026:23:11:03] ENGINE Bus STARTED 2026-03-08T23:11:04.127 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:03 vm06 bash[20883]: debug 2026-03-08T23:11:03.847+0000 7f8b684d8140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-08T23:11:04.128 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:03 vm06 bash[20883]: debug 2026-03-08T23:11:03.887+0000 7f8b684d8140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-08T23:11:04.128 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:03 vm06 bash[20883]: debug 2026-03-08T23:11:03.987+0000 7f8b684d8140 -1 mgr[py] Module orchestrator has missing 
NOTIFY_TYPES member 2026-03-08T23:11:04.518 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:04 vm06 bash[20883]: debug 2026-03-08T23:11:04.127+0000 7f8b684d8140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-08T23:11:04.518 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:04 vm06 bash[20883]: debug 2026-03-08T23:11:04.291+0000 7f8b684d8140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-08T23:11:04.518 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:04 vm06 bash[20883]: debug 2026-03-08T23:11:04.327+0000 7f8b684d8140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-08T23:11:04.518 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:04 vm06 bash[20883]: debug 2026-03-08T23:11:04.371+0000 7f8b684d8140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-08T23:11:04.778 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:04 vm06 bash[20625]: cluster 2026-03-08T23:11:03.681499+0000 mgr.x (mgr.24448) 8 : cluster [DBG] pgmap v4: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-08T23:11:04.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:04 vm06 bash[20625]: cluster 2026-03-08T23:11:03.681499+0000 mgr.x (mgr.24448) 8 : cluster [DBG] pgmap v4: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-08T23:11:04.779 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:04 vm06 bash[20883]: debug 2026-03-08T23:11:04.515+0000 7f8b684d8140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-08T23:11:04.779 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:04 vm06 bash[20883]: debug 2026-03-08T23:11:04.727+0000 7f8b684d8140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-08T23:11:04.779 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:04 vm06 bash[20883]: [08/Mar/2026:23:11:04] ENGINE Bus STARTING 2026-03-08T23:11:04.779 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:04 
vm06 bash[20883]: CherryPy Checker:
2026-03-08T23:11:04.779 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:04 vm06 bash[20883]: The Application mounted at '' has an empty config.
2026-03-08T23:11:04.779 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:04 vm06 bash[27746]: cluster 2026-03-08T23:11:03.681499+0000 mgr.x (mgr.24448) 8 : cluster [DBG] pgmap v4: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail
2026-03-08T23:11:05.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:04 vm11 bash[23232]: cluster 2026-03-08T23:11:03.681499+0000 mgr.x (mgr.24448) 8 : cluster [DBG] pgmap v4: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail
2026-03-08T23:11:05.278 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:04 vm06 bash[20883]: [08/Mar/2026:23:11:04] ENGINE Serving on http://:::9283
2026-03-08T23:11:05.278 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:04 vm06 bash[20883]: [08/Mar/2026:23:11:04] ENGINE Bus STARTED
2026-03-08T23:11:06.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:05 vm06 bash[20625]: cluster 2026-03-08T23:11:04.713244+0000 mon.a (mon.0) 935 : cluster [DBG] mgrmap e24: x(active, since 3s)
2026-03-08T23:11:06.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:05 vm06 bash[20625]: cluster 2026-03-08T23:11:04.732955+0000 mon.a (mon.0) 936 : cluster [DBG] Standby manager daemon y started
2026-03-08T23:11:06.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:05 vm06 bash[20625]: audit 2026-03-08T23:11:04.736664+0000 mon.a (mon.0) 937 : audit [DBG] from='mgr.? 192.168.123.106:0/2541662369' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/y/crt"}]: dispatch
2026-03-08T23:11:06.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:05 vm06 bash[20625]: audit 2026-03-08T23:11:04.737001+0000 mon.a (mon.0) 938 : audit [DBG] from='mgr.? 192.168.123.106:0/2541662369' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch
2026-03-08T23:11:06.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:05 vm06 bash[20625]: audit 2026-03-08T23:11:04.737608+0000 mon.a (mon.0) 939 : audit [DBG] from='mgr.? 192.168.123.106:0/2541662369' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/y/key"}]: dispatch
2026-03-08T23:11:06.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:05 vm06 bash[20625]: audit 2026-03-08T23:11:04.737772+0000 mon.a (mon.0) 940 : audit [DBG] from='mgr.? 192.168.123.106:0/2541662369' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch
2026-03-08T23:11:06.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:05 vm06 bash[27746]: cluster 2026-03-08T23:11:04.713244+0000 mon.a (mon.0) 935 : cluster [DBG] mgrmap e24: x(active, since 3s)
2026-03-08T23:11:06.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:05 vm06 bash[27746]: cluster 2026-03-08T23:11:04.732955+0000 mon.a (mon.0) 936 : cluster [DBG] Standby manager daemon y started
2026-03-08T23:11:06.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:05 vm06 bash[27746]: audit 2026-03-08T23:11:04.736664+0000 mon.a (mon.0) 937 : audit [DBG] from='mgr.? 192.168.123.106:0/2541662369' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/y/crt"}]: dispatch
2026-03-08T23:11:06.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:05 vm06 bash[27746]: audit 2026-03-08T23:11:04.737001+0000 mon.a (mon.0) 938 : audit [DBG] from='mgr.? 192.168.123.106:0/2541662369' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch
2026-03-08T23:11:06.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:05 vm06 bash[27746]: audit 2026-03-08T23:11:04.737608+0000 mon.a (mon.0) 939 : audit [DBG] from='mgr.? 192.168.123.106:0/2541662369' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/y/key"}]: dispatch
2026-03-08T23:11:06.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:05 vm06 bash[27746]: audit 2026-03-08T23:11:04.737772+0000 mon.a (mon.0) 940 : audit [DBG] from='mgr.? 192.168.123.106:0/2541662369' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch
2026-03-08T23:11:06.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:05 vm11 bash[23232]: cluster 2026-03-08T23:11:04.713244+0000 mon.a (mon.0) 935 : cluster [DBG] mgrmap e24: x(active, since 3s)
2026-03-08T23:11:06.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:05 vm11 bash[23232]: cluster 2026-03-08T23:11:04.732955+0000 mon.a (mon.0) 936 : cluster [DBG] Standby manager daemon y started
2026-03-08T23:11:06.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:05 vm11 bash[23232]: audit 2026-03-08T23:11:04.736664+0000 mon.a (mon.0) 937 : audit [DBG] from='mgr.? 192.168.123.106:0/2541662369' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/y/crt"}]: dispatch
2026-03-08T23:11:06.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:05 vm11 bash[23232]: audit 2026-03-08T23:11:04.737001+0000 mon.a (mon.0) 938 : audit [DBG] from='mgr.? 192.168.123.106:0/2541662369' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch
2026-03-08T23:11:06.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:05 vm11 bash[23232]: audit 2026-03-08T23:11:04.737608+0000 mon.a (mon.0) 939 : audit [DBG] from='mgr.? 192.168.123.106:0/2541662369' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/y/key"}]: dispatch
2026-03-08T23:11:06.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:05 vm11 bash[23232]: audit 2026-03-08T23:11:04.737772+0000 mon.a (mon.0) 940 : audit [DBG] from='mgr.? 192.168.123.106:0/2541662369' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch
2026-03-08T23:11:06.180 INFO:teuthology.orchestra.run.vm06.stderr:++ ceph auth get-key mgr.y
2026-03-08T23:11:06.352 INFO:teuthology.orchestra.run.vm06.stderr:+ NK=AQAFAq5pr2o5CxAAcjSPhvgIVSYRTTCpPFds0A==
2026-03-08T23:11:06.352 INFO:teuthology.orchestra.run.vm06.stderr:+ '[' AQCw/q1ptko8LBAAbRDcBdFGfF7luVs55STIDw== == AQAFAq5pr2o5CxAAcjSPhvgIVSYRTTCpPFds0A== ']'
2026-03-08T23:11:06.352 INFO:teuthology.orchestra.run.vm06.stderr:+ for f in osd.0 osd.1 osd.2 osd.3 osd.4 osd.5 osd.6 osd.7 mgr.y mgr.x
2026-03-08T23:11:06.353 INFO:teuthology.orchestra.run.vm06.stderr:+ echo 'rotating key for mgr.x'
2026-03-08T23:11:06.353 INFO:teuthology.orchestra.run.vm06.stdout:rotating key for mgr.x
2026-03-08T23:11:06.353 INFO:teuthology.orchestra.run.vm06.stderr:++ ceph auth get-key mgr.x
2026-03-08T23:11:06.529 INFO:teuthology.orchestra.run.vm06.stderr:+ K=AQD8/q1pTNWwLxAA4MPOr7leog04IIBUGWKCow==
2026-03-08T23:11:06.529 INFO:teuthology.orchestra.run.vm06.stderr:+ NK=AQD8/q1pTNWwLxAA4MPOr7leog04IIBUGWKCow==
2026-03-08T23:11:06.529 INFO:teuthology.orchestra.run.vm06.stderr:+ ceph orch daemon rotate-key mgr.x
2026-03-08T23:11:06.692 INFO:teuthology.orchestra.run.vm06.stdout:Scheduled to rotate-key mgr.x on host 'vm11'
2026-03-08T23:11:06.700 INFO:teuthology.orchestra.run.vm06.stderr:+ '[' AQD8/q1pTNWwLxAA4MPOr7leog04IIBUGWKCow== == AQD8/q1pTNWwLxAA4MPOr7leog04IIBUGWKCow== ']'
2026-03-08T23:11:06.700 INFO:teuthology.orchestra.run.vm06.stderr:+ sleep 5
2026-03-08T23:11:07.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:06 vm06 bash[20625]: cluster 2026-03-08T23:11:05.681852+0000 mgr.x (mgr.24448) 9 : cluster [DBG] pgmap v5: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail
2026-03-08T23:11:07.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:06 vm06 bash[20625]: audit 2026-03-08T23:11:05.728143+0000 mon.b (mon.1) 69 : audit [DBG] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch
2026-03-08T23:11:07.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:06 vm06 bash[20625]: cluster 2026-03-08T23:11:05.730859+0000 mon.a (mon.0) 941 : cluster [DBG] mgrmap e25: x(active, since 4s), standbys: y
2026-03-08T23:11:07.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:06 vm06 bash[20625]: audit 2026-03-08T23:11:06.346680+0000 mon.a (mon.0) 942 : audit [INF] from='client.? 192.168.123.106:0/1787676810' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "mgr.y"}]: dispatch
2026-03-08T23:11:07.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:06 vm06 bash[20625]: audit 2026-03-08T23:11:06.520455+0000 mon.b (mon.1) 70 : audit [INF] from='client.? 192.168.123.106:0/1334566085' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "mgr.x"}]: dispatch
2026-03-08T23:11:07.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:06 vm06 bash[20625]: audit 2026-03-08T23:11:06.685906+0000 mon.a (mon.0) 943 : audit [INF] from='mgr.24448 ' entity='mgr.x'
2026-03-08T23:11:07.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:06 vm06 bash[20625]: audit 2026-03-08T23:11:06.692668+0000 mon.a (mon.0) 944 : audit [INF] from='mgr.24448 ' entity='mgr.x'
2026-03-08T23:11:07.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:06 vm06 bash[27746]: cluster 2026-03-08T23:11:05.681852+0000 mgr.x (mgr.24448) 9 : cluster [DBG] pgmap v5: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail
2026-03-08T23:11:07.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:06 vm06 bash[27746]: audit 2026-03-08T23:11:05.728143+0000 mon.b (mon.1) 69 : audit [DBG] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch
2026-03-08T23:11:07.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:06 vm06 bash[27746]: cluster 2026-03-08T23:11:05.730859+0000 mon.a (mon.0) 941 : cluster [DBG] mgrmap e25: x(active, since 4s), standbys: y
2026-03-08T23:11:07.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:06 vm06 bash[27746]: audit 2026-03-08T23:11:06.346680+0000 mon.a (mon.0) 942 : audit [INF] from='client.? 192.168.123.106:0/1787676810' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "mgr.y"}]: dispatch
2026-03-08T23:11:07.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:06 vm06 bash[27746]: audit 2026-03-08T23:11:06.520455+0000 mon.b (mon.1) 70 : audit [INF] from='client.? 192.168.123.106:0/1334566085' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "mgr.x"}]: dispatch
2026-03-08T23:11:07.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:06 vm06 bash[27746]: audit 2026-03-08T23:11:06.685906+0000 mon.a (mon.0) 943 : audit [INF] from='mgr.24448 ' entity='mgr.x'
2026-03-08T23:11:07.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:06 vm06 bash[27746]: audit 2026-03-08T23:11:06.692668+0000 mon.a (mon.0) 944 : audit [INF] from='mgr.24448 ' entity='mgr.x'
2026-03-08T23:11:07.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:06 vm11 bash[23232]: cluster 2026-03-08T23:11:05.681852+0000 mgr.x (mgr.24448) 9 : cluster [DBG] pgmap v5: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail
2026-03-08T23:11:07.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:06 vm11 bash[23232]: audit 2026-03-08T23:11:05.728143+0000 mon.b (mon.1) 69 : audit [DBG] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch
2026-03-08T23:11:07.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:06 vm11 bash[23232]: cluster 2026-03-08T23:11:05.730859+0000 mon.a (mon.0) 941 : cluster [DBG] mgrmap e25: x(active, since 4s), standbys: y
2026-03-08T23:11:07.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:06 vm11 bash[23232]: audit 2026-03-08T23:11:06.346680+0000 mon.a (mon.0) 942 : audit [INF] from='client.? 192.168.123.106:0/1787676810' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "mgr.y"}]: dispatch
2026-03-08T23:11:07.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:06 vm11 bash[23232]: audit 2026-03-08T23:11:06.520455+0000 mon.b (mon.1) 70 : audit [INF] from='client.? 192.168.123.106:0/1334566085' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "mgr.x"}]: dispatch
2026-03-08T23:11:07.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:06 vm11 bash[23232]: audit 2026-03-08T23:11:06.685906+0000 mon.a (mon.0) 943 : audit [INF] from='mgr.24448 ' entity='mgr.x'
2026-03-08T23:11:07.058 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:06 vm11 bash[23232]: audit 2026-03-08T23:11:06.692668+0000 mon.a (mon.0) 944 : audit [INF] from='mgr.24448 ' entity='mgr.x'
2026-03-08T23:11:07.808 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:07 vm11 bash[23232]: audit 2026-03-08T23:11:06.674618+0000 mgr.x (mgr.24448) 10 : audit [DBG] from='client.15219 -' entity='client.admin' cmd=[{"prefix": "orch daemon", "action": "rotate-key", "name": "mgr.x", "target": ["mon-mgr", ""]}]: dispatch
2026-03-08T23:11:07.808 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:07 vm11 bash[23232]: cephadm 2026-03-08T23:11:06.675224+0000 mgr.x (mgr.24448) 11 : cephadm [INF] Schedule rotate-key daemon mgr.x
2026-03-08T23:11:07.808 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:07 vm11 bash[23232]: cluster 2026-03-08T23:11:06.745279+0000 mon.a (mon.0) 945 : cluster [DBG] mgrmap e26: x(active, since 5s), standbys: y
2026-03-08T23:11:07.808 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:07 vm11 bash[23232]: audit 2026-03-08T23:11:07.690028+0000 mon.a (mon.0) 946 : audit [INF] from='mgr.24448 ' entity='mgr.x'
2026-03-08T23:11:07.808 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:07 vm11 bash[23232]: audit 2026-03-08T23:11:07.709324+0000 mon.a (mon.0) 947 : audit [INF] from='mgr.24448 ' entity='mgr.x'
2026-03-08T23:11:07.809 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 08 23:11:07 vm11 bash[51823]: ts=2026-03-08T23:11:07.521Z caller=refresh.go:90 level=error component="discovery manager scrape" discovery=http config=node msg="Unable to refresh target groups" err="Get \"http://192.168.123.106:8765/sd/prometheus/sd-config?service=node-exporter\": dial tcp 192.168.123.106:8765: connect: connection refused"
2026-03-08T23:11:07.809 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 08 23:11:07 vm11 bash[51823]: ts=2026-03-08T23:11:07.521Z caller=refresh.go:90 level=error component="discovery manager scrape" discovery=http config=ceph msg="Unable to refresh target groups" err="Get \"http://192.168.123.106:8765/sd/prometheus/sd-config?service=mgr-prometheus\": dial tcp 192.168.123.106:8765: connect: connection refused"
2026-03-08T23:11:07.809 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 08 23:11:07 vm11 bash[51823]: ts=2026-03-08T23:11:07.529Z caller=refresh.go:90 level=error component="discovery manager scrape" discovery=http config=nvmeof msg="Unable to refresh target groups" err="Get \"http://192.168.123.106:8765/sd/prometheus/sd-config?service=nvmeof\": dial tcp 192.168.123.106:8765: connect: connection refused"
2026-03-08T23:11:07.809 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 08 23:11:07 vm11 bash[51823]: ts=2026-03-08T23:11:07.529Z caller=refresh.go:90 level=error component="discovery manager scrape" discovery=http config=ceph-exporter msg="Unable to refresh target groups" err="Get \"http://192.168.123.106:8765/sd/prometheus/sd-config?service=ceph-exporter\": dial tcp 192.168.123.106:8765: connect: connection refused"
2026-03-08T23:11:07.809 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 08 23:11:07 vm11 bash[51823]: ts=2026-03-08T23:11:07.529Z caller=refresh.go:90 level=error component="discovery manager scrape" discovery=http config=nfs msg="Unable to refresh target groups" err="Get \"http://192.168.123.106:8765/sd/prometheus/sd-config?service=nfs\": dial tcp 192.168.123.106:8765: connect: connection refused"
2026-03-08T23:11:07.809 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 08 23:11:07 vm11 bash[51823]: ts=2026-03-08T23:11:07.529Z caller=refresh.go:90 level=error component="discovery manager notify" discovery=http config=config-0 msg="Unable to refresh target groups" err="Get \"http://192.168.123.106:8765/sd/prometheus/sd-config?service=alertmanager\": dial tcp 192.168.123.106:8765: connect: connection refused"
2026-03-08T23:11:08.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:07 vm06 bash[20625]: audit 2026-03-08T23:11:06.674618+0000 mgr.x (mgr.24448) 10 : audit [DBG] from='client.15219 -' entity='client.admin' cmd=[{"prefix": "orch daemon", "action": "rotate-key", "name": "mgr.x", "target": ["mon-mgr", ""]}]: dispatch
2026-03-08T23:11:08.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:07 vm06 bash[20625]: cephadm 2026-03-08T23:11:06.675224+0000 mgr.x (mgr.24448) 11 : cephadm [INF] Schedule rotate-key daemon mgr.x
2026-03-08T23:11:08.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:07 vm06 bash[20625]: cluster 2026-03-08T23:11:06.745279+0000 mon.a (mon.0) 945 : cluster [DBG] mgrmap e26: x(active, since 5s), standbys: y
2026-03-08T23:11:08.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:07 vm06 bash[20625]: audit 2026-03-08T23:11:07.690028+0000 mon.a (mon.0) 946 : audit [INF] from='mgr.24448 ' entity='mgr.x'
2026-03-08T23:11:08.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:07 vm06 bash[20625]: audit 2026-03-08T23:11:07.709324+0000 mon.a (mon.0) 947 : audit [INF] from='mgr.24448 ' entity='mgr.x'
2026-03-08T23:11:08.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:07 vm06 bash[27746]: audit 2026-03-08T23:11:06.674618+0000 mgr.x (mgr.24448) 10 : audit [DBG] from='client.15219 -' entity='client.admin' cmd=[{"prefix": "orch daemon", "action": "rotate-key", "name": "mgr.x", "target": ["mon-mgr", ""]}]: dispatch
2026-03-08T23:11:08.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:07 vm06 bash[27746]: cephadm 2026-03-08T23:11:06.675224+0000 mgr.x (mgr.24448) 11 : cephadm [INF] Schedule rotate-key daemon mgr.x
2026-03-08T23:11:08.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:07 vm06 bash[27746]: cluster 2026-03-08T23:11:06.745279+0000 mon.a (mon.0) 945 : cluster [DBG] mgrmap e26: x(active, since 5s), standbys: y
2026-03-08T23:11:08.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:07 vm06 bash[27746]: audit 2026-03-08T23:11:07.690028+0000 mon.a (mon.0) 946 : audit [INF] from='mgr.24448 ' entity='mgr.x'
2026-03-08T23:11:08.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:07 vm06 bash[27746]: audit 2026-03-08T23:11:07.709324+0000 mon.a (mon.0) 947 : audit [INF] from='mgr.24448 ' entity='mgr.x'
2026-03-08T23:11:09.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:08 vm06 bash[20625]: cluster 2026-03-08T23:11:07.682129+0000 mgr.x (mgr.24448) 12 : cluster [DBG] pgmap v6: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail
2026-03-08T23:11:09.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:08 vm06 bash[20625]: audit 2026-03-08T23:11:07.917726+0000 mon.a (mon.0) 948 : audit [INF] from='mgr.24448 ' entity='mgr.x'
2026-03-08T23:11:09.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:08 vm06 bash[20625]: audit 2026-03-08T23:11:07.926482+0000 mon.a (mon.0) 949 : audit [INF] from='mgr.24448 ' entity='mgr.x'
2026-03-08T23:11:09.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:08 vm06 bash[20625]: audit 2026-03-08T23:11:08.303948+0000 mon.a (mon.0) 950 : audit [INF] from='mgr.24448 ' entity='mgr.x'
2026-03-08T23:11:09.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:08 vm06 bash[20625]: audit 2026-03-08T23:11:08.311556+0000 mon.b (mon.1) 71 : audit [INF] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd/host:vm06", "name": "osd_memory_target"}]: dispatch
2026-03-08T23:11:09.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:08 vm06 bash[20625]: audit 2026-03-08T23:11:08.311748+0000 mon.a (mon.0) 951 : audit [INF] from='mgr.24448 ' entity='mgr.x'
2026-03-08T23:11:09.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:08 vm06 bash[20625]: audit 2026-03-08T23:11:08.313932+0000 mon.a (mon.0) 952 : audit [INF] from='mgr.24448 ' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd/host:vm06", "name": "osd_memory_target"}]: dispatch
2026-03-08T23:11:09.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:08 vm06 bash[20625]: audit 2026-03-08T23:11:08.493578+0000 mon.a (mon.0) 953 : audit [INF] from='mgr.24448 ' entity='mgr.x'
2026-03-08T23:11:09.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:08 vm06 bash[20625]: audit 2026-03-08T23:11:08.501180+0000 mon.b (mon.1) 72 : audit [INF] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd/host:vm11", "name": "osd_memory_target"}]: dispatch
2026-03-08T23:11:09.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:08 vm06 bash[20625]: audit 2026-03-08T23:11:08.501319+0000 mon.a (mon.0) 954 : audit [INF] from='mgr.24448 ' entity='mgr.x'
2026-03-08T23:11:09.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:08 vm06 bash[20625]: audit 2026-03-08T23:11:08.502826+0000 mon.b (mon.1) 73 : audit [DBG] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2026-03-08T23:11:09.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:08 vm06 bash[20625]: audit 2026-03-08T23:11:08.503623+0000 mon.a (mon.0) 955 : audit [INF] from='mgr.24448 ' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd/host:vm11", "name": "osd_memory_target"}]: dispatch
2026-03-08T23:11:09.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:08 vm06 bash[20625]: audit 2026-03-08T23:11:08.503820+0000 mon.b (mon.1) 74 : audit [INF] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
2026-03-08T23:11:09.280 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:08 vm06 bash[20625]: audit 2026-03-08T23:11:08.654908+0000 mon.a (mon.0) 956 : audit [INF] from='mgr.24448 ' entity='mgr.x'
2026-03-08T23:11:09.281 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:08 vm06 bash[20625]: audit 2026-03-08T23:11:08.663576+0000 mon.a (mon.0) 957 : audit [INF] from='mgr.24448 ' entity='mgr.x'
2026-03-08T23:11:09.281 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:08 vm06 bash[20625]: audit 2026-03-08T23:11:08.671000+0000 mon.a (mon.0) 958 : audit [INF] from='mgr.24448 ' entity='mgr.x'
2026-03-08T23:11:09.281 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:08 vm06 bash[20625]: audit
2026-03-08T23:11:08.671000+0000 mon.a (mon.0) 958 : audit [INF] from='mgr.24448 ' entity='mgr.x' 2026-03-08T23:11:09.281 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:08 vm06 bash[20625]: audit 2026-03-08T23:11:08.678701+0000 mon.a (mon.0) 959 : audit [INF] from='mgr.24448 ' entity='mgr.x' 2026-03-08T23:11:09.281 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:08 vm06 bash[20625]: audit 2026-03-08T23:11:08.678701+0000 mon.a (mon.0) 959 : audit [INF] from='mgr.24448 ' entity='mgr.x' 2026-03-08T23:11:09.281 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:08 vm06 bash[20625]: audit 2026-03-08T23:11:08.685473+0000 mon.a (mon.0) 960 : audit [INF] from='mgr.24448 ' entity='mgr.x' 2026-03-08T23:11:09.281 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:08 vm06 bash[20625]: audit 2026-03-08T23:11:08.685473+0000 mon.a (mon.0) 960 : audit [INF] from='mgr.24448 ' entity='mgr.x' 2026-03-08T23:11:09.281 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:08 vm06 bash[20625]: audit 2026-03-08T23:11:08.696553+0000 mon.b (mon.1) 75 : audit [INF] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "auth get-or-create-pending", "entity": "mgr.y", "format": "json"}]: dispatch 2026-03-08T23:11:09.281 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:08 vm06 bash[20625]: audit 2026-03-08T23:11:08.696553+0000 mon.b (mon.1) 75 : audit [INF] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "auth get-or-create-pending", "entity": "mgr.y", "format": "json"}]: dispatch 2026-03-08T23:11:09.281 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:08 vm06 bash[20625]: audit 2026-03-08T23:11:08.699063+0000 mon.a (mon.0) 961 : audit [INF] from='mgr.24448 ' entity='mgr.x' cmd=[{"prefix": "auth get-or-create-pending", "entity": "mgr.y", "format": "json"}]: dispatch 2026-03-08T23:11:09.281 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:08 vm06 bash[20625]: audit 2026-03-08T23:11:08.699063+0000 mon.a (mon.0) 961 : audit [INF] 
from='mgr.24448 ' entity='mgr.x' cmd=[{"prefix": "auth get-or-create-pending", "entity": "mgr.y", "format": "json"}]: dispatch 2026-03-08T23:11:09.281 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:08 vm06 bash[20625]: audit 2026-03-08T23:11:08.701560+0000 mon.a (mon.0) 962 : audit [INF] from='mgr.24448 ' entity='mgr.x' cmd='[{"prefix": "auth get-or-create-pending", "entity": "mgr.y", "format": "json"}]': finished 2026-03-08T23:11:09.281 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:08 vm06 bash[20625]: audit 2026-03-08T23:11:08.701560+0000 mon.a (mon.0) 962 : audit [INF] from='mgr.24448 ' entity='mgr.x' cmd='[{"prefix": "auth get-or-create-pending", "entity": "mgr.y", "format": "json"}]': finished 2026-03-08T23:11:09.281 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:08 vm06 bash[20625]: audit 2026-03-08T23:11:08.702788+0000 mon.b (mon.1) 76 : audit [INF] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-08T23:11:09.281 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:08 vm06 bash[20625]: audit 2026-03-08T23:11:08.702788+0000 mon.b (mon.1) 76 : audit [INF] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-08T23:11:09.281 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:08 vm06 bash[20625]: audit 2026-03-08T23:11:08.703979+0000 mon.b (mon.1) 77 : audit [DBG] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-08T23:11:09.281 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:08 vm06 bash[20625]: audit 2026-03-08T23:11:08.703979+0000 mon.b (mon.1) 77 : audit [DBG] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "mgr services"}]: dispatch 
2026-03-08T23:11:09.281 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:08 vm06 bash[20625]: audit 2026-03-08T23:11:08.704766+0000 mon.b (mon.1) 78 : audit [DBG] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:11:09.281 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:08 vm06 bash[20625]: audit 2026-03-08T23:11:08.704766+0000 mon.b (mon.1) 78 : audit [DBG] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:11:09.281 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:08 vm06 bash[20625]: audit 2026-03-08T23:11:08.705304+0000 mon.a (mon.0) 963 : audit [INF] from='mgr.24448 ' entity='mgr.x' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-08T23:11:09.281 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:08 vm06 bash[20625]: audit 2026-03-08T23:11:08.705304+0000 mon.a (mon.0) 963 : audit [INF] from='mgr.24448 ' entity='mgr.x' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-08T23:11:09.281 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:08 vm06 bash[27746]: cluster 2026-03-08T23:11:07.682129+0000 mgr.x (mgr.24448) 12 : cluster [DBG] pgmap v6: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-08T23:11:09.281 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:08 vm06 bash[27746]: cluster 2026-03-08T23:11:07.682129+0000 mgr.x (mgr.24448) 12 : cluster [DBG] pgmap v6: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-08T23:11:09.281 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:08 vm06 bash[27746]: audit 2026-03-08T23:11:07.917726+0000 mon.a (mon.0) 948 : audit [INF] from='mgr.24448 ' entity='mgr.x' 2026-03-08T23:11:09.281 
INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:08 vm06 bash[27746]: audit 2026-03-08T23:11:07.917726+0000 mon.a (mon.0) 948 : audit [INF] from='mgr.24448 ' entity='mgr.x' 2026-03-08T23:11:09.281 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:08 vm06 bash[27746]: audit 2026-03-08T23:11:07.926482+0000 mon.a (mon.0) 949 : audit [INF] from='mgr.24448 ' entity='mgr.x' 2026-03-08T23:11:09.281 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:08 vm06 bash[27746]: audit 2026-03-08T23:11:07.926482+0000 mon.a (mon.0) 949 : audit [INF] from='mgr.24448 ' entity='mgr.x' 2026-03-08T23:11:09.281 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:08 vm06 bash[27746]: audit 2026-03-08T23:11:08.303948+0000 mon.a (mon.0) 950 : audit [INF] from='mgr.24448 ' entity='mgr.x' 2026-03-08T23:11:09.281 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:08 vm06 bash[27746]: audit 2026-03-08T23:11:08.303948+0000 mon.a (mon.0) 950 : audit [INF] from='mgr.24448 ' entity='mgr.x' 2026-03-08T23:11:09.281 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:08 vm06 bash[27746]: audit 2026-03-08T23:11:08.311556+0000 mon.b (mon.1) 71 : audit [INF] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd/host:vm06", "name": "osd_memory_target"}]: dispatch 2026-03-08T23:11:09.281 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:08 vm06 bash[27746]: audit 2026-03-08T23:11:08.311556+0000 mon.b (mon.1) 71 : audit [INF] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd/host:vm06", "name": "osd_memory_target"}]: dispatch 2026-03-08T23:11:09.281 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:08 vm06 bash[27746]: audit 2026-03-08T23:11:08.311748+0000 mon.a (mon.0) 951 : audit [INF] from='mgr.24448 ' entity='mgr.x' 2026-03-08T23:11:09.281 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:08 vm06 bash[27746]: audit 2026-03-08T23:11:08.311748+0000 mon.a (mon.0) 951 : audit [INF] from='mgr.24448 
' entity='mgr.x' 2026-03-08T23:11:09.281 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:08 vm06 bash[27746]: audit 2026-03-08T23:11:08.313932+0000 mon.a (mon.0) 952 : audit [INF] from='mgr.24448 ' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd/host:vm06", "name": "osd_memory_target"}]: dispatch 2026-03-08T23:11:09.281 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:08 vm06 bash[27746]: audit 2026-03-08T23:11:08.313932+0000 mon.a (mon.0) 952 : audit [INF] from='mgr.24448 ' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd/host:vm06", "name": "osd_memory_target"}]: dispatch 2026-03-08T23:11:09.281 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:08 vm06 bash[27746]: audit 2026-03-08T23:11:08.493578+0000 mon.a (mon.0) 953 : audit [INF] from='mgr.24448 ' entity='mgr.x' 2026-03-08T23:11:09.281 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:08 vm06 bash[27746]: audit 2026-03-08T23:11:08.493578+0000 mon.a (mon.0) 953 : audit [INF] from='mgr.24448 ' entity='mgr.x' 2026-03-08T23:11:09.281 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:08 vm06 bash[27746]: audit 2026-03-08T23:11:08.501180+0000 mon.b (mon.1) 72 : audit [INF] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd/host:vm11", "name": "osd_memory_target"}]: dispatch 2026-03-08T23:11:09.281 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:08 vm06 bash[27746]: audit 2026-03-08T23:11:08.501180+0000 mon.b (mon.1) 72 : audit [INF] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd/host:vm11", "name": "osd_memory_target"}]: dispatch 2026-03-08T23:11:09.281 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:08 vm06 bash[27746]: audit 2026-03-08T23:11:08.501319+0000 mon.a (mon.0) 954 : audit [INF] from='mgr.24448 ' entity='mgr.x' 2026-03-08T23:11:09.281 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:08 vm06 bash[27746]: audit 2026-03-08T23:11:08.501319+0000 mon.a (mon.0) 954 : 
audit [INF] from='mgr.24448 ' entity='mgr.x' 2026-03-08T23:11:09.281 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:08 vm06 bash[27746]: audit 2026-03-08T23:11:08.502826+0000 mon.b (mon.1) 73 : audit [DBG] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:11:09.281 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:08 vm06 bash[27746]: audit 2026-03-08T23:11:08.502826+0000 mon.b (mon.1) 73 : audit [DBG] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:11:09.281 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:08 vm06 bash[27746]: audit 2026-03-08T23:11:08.503623+0000 mon.a (mon.0) 955 : audit [INF] from='mgr.24448 ' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd/host:vm11", "name": "osd_memory_target"}]: dispatch 2026-03-08T23:11:09.281 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:08 vm06 bash[27746]: audit 2026-03-08T23:11:08.503623+0000 mon.a (mon.0) 955 : audit [INF] from='mgr.24448 ' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd/host:vm11", "name": "osd_memory_target"}]: dispatch 2026-03-08T23:11:09.281 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:08 vm06 bash[27746]: audit 2026-03-08T23:11:08.503820+0000 mon.b (mon.1) 74 : audit [INF] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:11:09.281 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:08 vm06 bash[27746]: audit 2026-03-08T23:11:08.503820+0000 mon.b (mon.1) 74 : audit [INF] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:11:09.281 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:08 vm06 bash[27746]: audit 2026-03-08T23:11:08.654908+0000 mon.a (mon.0) 956 : audit [INF] from='mgr.24448 ' entity='mgr.x' 
2026-03-08T23:11:09.281 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:08 vm06 bash[27746]: audit 2026-03-08T23:11:08.654908+0000 mon.a (mon.0) 956 : audit [INF] from='mgr.24448 ' entity='mgr.x' 2026-03-08T23:11:09.282 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:08 vm06 bash[27746]: audit 2026-03-08T23:11:08.663576+0000 mon.a (mon.0) 957 : audit [INF] from='mgr.24448 ' entity='mgr.x' 2026-03-08T23:11:09.282 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:08 vm06 bash[27746]: audit 2026-03-08T23:11:08.663576+0000 mon.a (mon.0) 957 : audit [INF] from='mgr.24448 ' entity='mgr.x' 2026-03-08T23:11:09.282 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:08 vm06 bash[27746]: audit 2026-03-08T23:11:08.671000+0000 mon.a (mon.0) 958 : audit [INF] from='mgr.24448 ' entity='mgr.x' 2026-03-08T23:11:09.282 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:08 vm06 bash[27746]: audit 2026-03-08T23:11:08.671000+0000 mon.a (mon.0) 958 : audit [INF] from='mgr.24448 ' entity='mgr.x' 2026-03-08T23:11:09.282 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:08 vm06 bash[27746]: audit 2026-03-08T23:11:08.678701+0000 mon.a (mon.0) 959 : audit [INF] from='mgr.24448 ' entity='mgr.x' 2026-03-08T23:11:09.282 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:08 vm06 bash[27746]: audit 2026-03-08T23:11:08.678701+0000 mon.a (mon.0) 959 : audit [INF] from='mgr.24448 ' entity='mgr.x' 2026-03-08T23:11:09.282 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:08 vm06 bash[27746]: audit 2026-03-08T23:11:08.685473+0000 mon.a (mon.0) 960 : audit [INF] from='mgr.24448 ' entity='mgr.x' 2026-03-08T23:11:09.282 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:08 vm06 bash[27746]: audit 2026-03-08T23:11:08.685473+0000 mon.a (mon.0) 960 : audit [INF] from='mgr.24448 ' entity='mgr.x' 2026-03-08T23:11:09.282 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:08 vm06 bash[27746]: audit 2026-03-08T23:11:08.696553+0000 mon.b (mon.1) 75 : audit [INF] from='mgr.24448 
192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "auth get-or-create-pending", "entity": "mgr.y", "format": "json"}]: dispatch 2026-03-08T23:11:09.282 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:08 vm06 bash[27746]: audit 2026-03-08T23:11:08.696553+0000 mon.b (mon.1) 75 : audit [INF] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "auth get-or-create-pending", "entity": "mgr.y", "format": "json"}]: dispatch 2026-03-08T23:11:09.282 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:08 vm06 bash[27746]: audit 2026-03-08T23:11:08.699063+0000 mon.a (mon.0) 961 : audit [INF] from='mgr.24448 ' entity='mgr.x' cmd=[{"prefix": "auth get-or-create-pending", "entity": "mgr.y", "format": "json"}]: dispatch 2026-03-08T23:11:09.282 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:08 vm06 bash[27746]: audit 2026-03-08T23:11:08.699063+0000 mon.a (mon.0) 961 : audit [INF] from='mgr.24448 ' entity='mgr.x' cmd=[{"prefix": "auth get-or-create-pending", "entity": "mgr.y", "format": "json"}]: dispatch 2026-03-08T23:11:09.282 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:08 vm06 bash[27746]: audit 2026-03-08T23:11:08.701560+0000 mon.a (mon.0) 962 : audit [INF] from='mgr.24448 ' entity='mgr.x' cmd='[{"prefix": "auth get-or-create-pending", "entity": "mgr.y", "format": "json"}]': finished 2026-03-08T23:11:09.282 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:08 vm06 bash[27746]: audit 2026-03-08T23:11:08.701560+0000 mon.a (mon.0) 962 : audit [INF] from='mgr.24448 ' entity='mgr.x' cmd='[{"prefix": "auth get-or-create-pending", "entity": "mgr.y", "format": "json"}]': finished 2026-03-08T23:11:09.282 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:08 vm06 bash[27746]: audit 2026-03-08T23:11:08.702788+0000 mon.b (mon.1) 76 : audit [INF] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: 
dispatch 2026-03-08T23:11:09.282 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:08 vm06 bash[27746]: audit 2026-03-08T23:11:08.702788+0000 mon.b (mon.1) 76 : audit [INF] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-08T23:11:09.282 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:08 vm06 bash[27746]: audit 2026-03-08T23:11:08.703979+0000 mon.b (mon.1) 77 : audit [DBG] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-08T23:11:09.282 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:08 vm06 bash[27746]: audit 2026-03-08T23:11:08.703979+0000 mon.b (mon.1) 77 : audit [DBG] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-08T23:11:09.282 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:08 vm06 bash[27746]: audit 2026-03-08T23:11:08.704766+0000 mon.b (mon.1) 78 : audit [DBG] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:11:09.282 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:08 vm06 bash[27746]: audit 2026-03-08T23:11:08.704766+0000 mon.b (mon.1) 78 : audit [DBG] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:11:09.282 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:08 vm06 bash[27746]: audit 2026-03-08T23:11:08.705304+0000 mon.a (mon.0) 963 : audit [INF] from='mgr.24448 ' entity='mgr.x' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-08T23:11:09.282 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:08 vm06 bash[27746]: audit 2026-03-08T23:11:08.705304+0000 mon.a (mon.0) 963 : audit [INF] 
from='mgr.24448 ' entity='mgr.x' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-08T23:11:09.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:08 vm11 bash[23232]: cluster 2026-03-08T23:11:07.682129+0000 mgr.x (mgr.24448) 12 : cluster [DBG] pgmap v6: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-08T23:11:09.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:08 vm11 bash[23232]: cluster 2026-03-08T23:11:07.682129+0000 mgr.x (mgr.24448) 12 : cluster [DBG] pgmap v6: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail 2026-03-08T23:11:09.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:08 vm11 bash[23232]: audit 2026-03-08T23:11:07.917726+0000 mon.a (mon.0) 948 : audit [INF] from='mgr.24448 ' entity='mgr.x' 2026-03-08T23:11:09.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:08 vm11 bash[23232]: audit 2026-03-08T23:11:07.917726+0000 mon.a (mon.0) 948 : audit [INF] from='mgr.24448 ' entity='mgr.x' 2026-03-08T23:11:09.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:08 vm11 bash[23232]: audit 2026-03-08T23:11:07.926482+0000 mon.a (mon.0) 949 : audit [INF] from='mgr.24448 ' entity='mgr.x' 2026-03-08T23:11:09.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:08 vm11 bash[23232]: audit 2026-03-08T23:11:07.926482+0000 mon.a (mon.0) 949 : audit [INF] from='mgr.24448 ' entity='mgr.x' 2026-03-08T23:11:09.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:08 vm11 bash[23232]: audit 2026-03-08T23:11:08.303948+0000 mon.a (mon.0) 950 : audit [INF] from='mgr.24448 ' entity='mgr.x' 2026-03-08T23:11:09.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:08 vm11 bash[23232]: audit 2026-03-08T23:11:08.303948+0000 mon.a (mon.0) 950 : audit [INF] from='mgr.24448 ' entity='mgr.x' 2026-03-08T23:11:09.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:08 vm11 bash[23232]: audit 
2026-03-08T23:11:08.311556+0000 mon.b (mon.1) 71 : audit [INF] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd/host:vm06", "name": "osd_memory_target"}]: dispatch 2026-03-08T23:11:09.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:08 vm11 bash[23232]: audit 2026-03-08T23:11:08.311556+0000 mon.b (mon.1) 71 : audit [INF] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd/host:vm06", "name": "osd_memory_target"}]: dispatch 2026-03-08T23:11:09.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:08 vm11 bash[23232]: audit 2026-03-08T23:11:08.311748+0000 mon.a (mon.0) 951 : audit [INF] from='mgr.24448 ' entity='mgr.x' 2026-03-08T23:11:09.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:08 vm11 bash[23232]: audit 2026-03-08T23:11:08.311748+0000 mon.a (mon.0) 951 : audit [INF] from='mgr.24448 ' entity='mgr.x' 2026-03-08T23:11:09.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:08 vm11 bash[23232]: audit 2026-03-08T23:11:08.313932+0000 mon.a (mon.0) 952 : audit [INF] from='mgr.24448 ' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd/host:vm06", "name": "osd_memory_target"}]: dispatch 2026-03-08T23:11:09.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:08 vm11 bash[23232]: audit 2026-03-08T23:11:08.313932+0000 mon.a (mon.0) 952 : audit [INF] from='mgr.24448 ' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd/host:vm06", "name": "osd_memory_target"}]: dispatch 2026-03-08T23:11:09.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:08 vm11 bash[23232]: audit 2026-03-08T23:11:08.493578+0000 mon.a (mon.0) 953 : audit [INF] from='mgr.24448 ' entity='mgr.x' 2026-03-08T23:11:09.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:08 vm11 bash[23232]: audit 2026-03-08T23:11:08.493578+0000 mon.a (mon.0) 953 : audit [INF] from='mgr.24448 ' entity='mgr.x' 2026-03-08T23:11:09.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 
23:11:08 vm11 bash[23232]: audit 2026-03-08T23:11:08.501180+0000 mon.b (mon.1) 72 : audit [INF] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd/host:vm11", "name": "osd_memory_target"}]: dispatch 2026-03-08T23:11:09.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:08 vm11 bash[23232]: audit 2026-03-08T23:11:08.501180+0000 mon.b (mon.1) 72 : audit [INF] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd/host:vm11", "name": "osd_memory_target"}]: dispatch 2026-03-08T23:11:09.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:08 vm11 bash[23232]: audit 2026-03-08T23:11:08.501319+0000 mon.a (mon.0) 954 : audit [INF] from='mgr.24448 ' entity='mgr.x' 2026-03-08T23:11:09.308 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:08 vm11 bash[23232]: audit 2026-03-08T23:11:08.501319+0000 mon.a (mon.0) 954 : audit [INF] from='mgr.24448 ' entity='mgr.x' 2026-03-08T23:11:09.309 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:08 vm11 bash[23232]: audit 2026-03-08T23:11:08.502826+0000 mon.b (mon.1) 73 : audit [DBG] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:11:09.309 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:08 vm11 bash[23232]: audit 2026-03-08T23:11:08.502826+0000 mon.b (mon.1) 73 : audit [DBG] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:11:09.309 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:08 vm11 bash[23232]: audit 2026-03-08T23:11:08.503623+0000 mon.a (mon.0) 955 : audit [INF] from='mgr.24448 ' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd/host:vm11", "name": "osd_memory_target"}]: dispatch 2026-03-08T23:11:09.309 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:08 vm11 bash[23232]: audit 2026-03-08T23:11:08.503623+0000 mon.a (mon.0) 955 : audit 
[INF] from='mgr.24448 ' entity='mgr.x' cmd=[{"prefix": "config rm", "who": "osd/host:vm11", "name": "osd_memory_target"}]: dispatch 2026-03-08T23:11:09.309 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:08 vm11 bash[23232]: audit 2026-03-08T23:11:08.503820+0000 mon.b (mon.1) 74 : audit [INF] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:11:09.309 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:08 vm11 bash[23232]: audit 2026-03-08T23:11:08.503820+0000 mon.b (mon.1) 74 : audit [INF] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-08T23:11:09.309 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:08 vm11 bash[23232]: audit 2026-03-08T23:11:08.654908+0000 mon.a (mon.0) 956 : audit [INF] from='mgr.24448 ' entity='mgr.x' 2026-03-08T23:11:09.309 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:08 vm11 bash[23232]: audit 2026-03-08T23:11:08.654908+0000 mon.a (mon.0) 956 : audit [INF] from='mgr.24448 ' entity='mgr.x' 2026-03-08T23:11:09.309 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:08 vm11 bash[23232]: audit 2026-03-08T23:11:08.663576+0000 mon.a (mon.0) 957 : audit [INF] from='mgr.24448 ' entity='mgr.x' 2026-03-08T23:11:09.309 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:08 vm11 bash[23232]: audit 2026-03-08T23:11:08.663576+0000 mon.a (mon.0) 957 : audit [INF] from='mgr.24448 ' entity='mgr.x' 2026-03-08T23:11:09.309 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:08 vm11 bash[23232]: audit 2026-03-08T23:11:08.671000+0000 mon.a (mon.0) 958 : audit [INF] from='mgr.24448 ' entity='mgr.x' 2026-03-08T23:11:09.309 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:08 vm11 bash[23232]: audit 2026-03-08T23:11:08.671000+0000 mon.a (mon.0) 958 : audit [INF] from='mgr.24448 ' entity='mgr.x' 2026-03-08T23:11:09.309 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:08 vm11 
bash[23232]: audit 2026-03-08T23:11:08.678701+0000 mon.a (mon.0) 959 : audit [INF] from='mgr.24448 ' entity='mgr.x' 2026-03-08T23:11:09.309 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:08 vm11 bash[23232]: audit 2026-03-08T23:11:08.678701+0000 mon.a (mon.0) 959 : audit [INF] from='mgr.24448 ' entity='mgr.x' 2026-03-08T23:11:09.309 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:08 vm11 bash[23232]: audit 2026-03-08T23:11:08.685473+0000 mon.a (mon.0) 960 : audit [INF] from='mgr.24448 ' entity='mgr.x' 2026-03-08T23:11:09.309 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:08 vm11 bash[23232]: audit 2026-03-08T23:11:08.685473+0000 mon.a (mon.0) 960 : audit [INF] from='mgr.24448 ' entity='mgr.x' 2026-03-08T23:11:09.309 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:08 vm11 bash[23232]: audit 2026-03-08T23:11:08.696553+0000 mon.b (mon.1) 75 : audit [INF] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "auth get-or-create-pending", "entity": "mgr.y", "format": "json"}]: dispatch 2026-03-08T23:11:09.309 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:08 vm11 bash[23232]: audit 2026-03-08T23:11:08.696553+0000 mon.b (mon.1) 75 : audit [INF] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "auth get-or-create-pending", "entity": "mgr.y", "format": "json"}]: dispatch 2026-03-08T23:11:09.309 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:08 vm11 bash[23232]: audit 2026-03-08T23:11:08.699063+0000 mon.a (mon.0) 961 : audit [INF] from='mgr.24448 ' entity='mgr.x' cmd=[{"prefix": "auth get-or-create-pending", "entity": "mgr.y", "format": "json"}]: dispatch 2026-03-08T23:11:09.309 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:08 vm11 bash[23232]: audit 2026-03-08T23:11:08.699063+0000 mon.a (mon.0) 961 : audit [INF] from='mgr.24448 ' entity='mgr.x' cmd=[{"prefix": "auth get-or-create-pending", "entity": "mgr.y", "format": "json"}]: dispatch 2026-03-08T23:11:09.309 
INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:08 vm11 bash[23232]: audit 2026-03-08T23:11:08.701560+0000 mon.a (mon.0) 962 : audit [INF] from='mgr.24448 ' entity='mgr.x' cmd='[{"prefix": "auth get-or-create-pending", "entity": "mgr.y", "format": "json"}]': finished 2026-03-08T23:11:09.309 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:08 vm11 bash[23232]: audit 2026-03-08T23:11:08.701560+0000 mon.a (mon.0) 962 : audit [INF] from='mgr.24448 ' entity='mgr.x' cmd='[{"prefix": "auth get-or-create-pending", "entity": "mgr.y", "format": "json"}]': finished 2026-03-08T23:11:09.309 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:08 vm11 bash[23232]: audit 2026-03-08T23:11:08.702788+0000 mon.b (mon.1) 76 : audit [INF] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-08T23:11:09.309 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:08 vm11 bash[23232]: audit 2026-03-08T23:11:08.702788+0000 mon.b (mon.1) 76 : audit [INF] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-08T23:11:09.309 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:08 vm11 bash[23232]: audit 2026-03-08T23:11:08.703979+0000 mon.b (mon.1) 77 : audit [DBG] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-08T23:11:09.309 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:08 vm11 bash[23232]: audit 2026-03-08T23:11:08.703979+0000 mon.b (mon.1) 77 : audit [DBG] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-08T23:11:09.309 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:08 vm11 bash[23232]: audit 2026-03-08T23:11:08.704766+0000 mon.b (mon.1) 78 : audit [DBG] 
from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:11:09.309 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:08 vm11 bash[23232]: audit 2026-03-08T23:11:08.704766+0000 mon.b (mon.1) 78 : audit [DBG] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:11:09.309 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:08 vm11 bash[23232]: audit 2026-03-08T23:11:08.705304+0000 mon.a (mon.0) 963 : audit [INF] from='mgr.24448 ' entity='mgr.x' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-08T23:11:09.309 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:08 vm11 bash[23232]: audit 2026-03-08T23:11:08.705304+0000 mon.a (mon.0) 963 : audit [INF] from='mgr.24448 ' entity='mgr.x' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.y", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-08T23:11:10.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:10 vm06 bash[20625]: cephadm 2026-03-08T23:11:08.504963+0000 mgr.x (mgr.24448) 13 : cephadm [INF] Updating vm06:/etc/ceph/ceph.conf 2026-03-08T23:11:10.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:10 vm06 bash[20625]: cephadm 2026-03-08T23:11:08.504963+0000 mgr.x (mgr.24448) 13 : cephadm [INF] Updating vm06:/etc/ceph/ceph.conf 2026-03-08T23:11:10.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:10 vm06 bash[20625]: cephadm 2026-03-08T23:11:08.505172+0000 mgr.x (mgr.24448) 14 : cephadm [INF] Updating vm11:/etc/ceph/ceph.conf 2026-03-08T23:11:10.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:10 vm06 bash[20625]: cephadm 2026-03-08T23:11:08.505172+0000 mgr.x (mgr.24448) 14 : cephadm [INF] Updating vm11:/etc/ceph/ceph.conf 2026-03-08T23:11:10.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:10 vm06 
bash[20625]: cephadm 2026-03-08T23:11:08.543043+0000 mgr.x (mgr.24448) 15 : cephadm [INF] Updating vm11:/var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/config/ceph.conf 2026-03-08T23:11:10.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:10 vm06 bash[20625]: cephadm 2026-03-08T23:11:08.543043+0000 mgr.x (mgr.24448) 15 : cephadm [INF] Updating vm11:/var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/config/ceph.conf 2026-03-08T23:11:10.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:10 vm06 bash[20625]: cephadm 2026-03-08T23:11:08.545943+0000 mgr.x (mgr.24448) 16 : cephadm [INF] Updating vm06:/var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/config/ceph.conf 2026-03-08T23:11:10.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:10 vm06 bash[20625]: cephadm 2026-03-08T23:11:08.545943+0000 mgr.x (mgr.24448) 16 : cephadm [INF] Updating vm06:/var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/config/ceph.conf 2026-03-08T23:11:10.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:10 vm06 bash[20625]: cephadm 2026-03-08T23:11:08.577888+0000 mgr.x (mgr.24448) 17 : cephadm [INF] Updating vm11:/etc/ceph/ceph.client.admin.keyring 2026-03-08T23:11:10.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:10 vm06 bash[20625]: cephadm 2026-03-08T23:11:08.577888+0000 mgr.x (mgr.24448) 17 : cephadm [INF] Updating vm11:/etc/ceph/ceph.client.admin.keyring 2026-03-08T23:11:10.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:10 vm06 bash[20625]: cephadm 2026-03-08T23:11:08.586079+0000 mgr.x (mgr.24448) 18 : cephadm [INF] Updating vm06:/etc/ceph/ceph.client.admin.keyring 2026-03-08T23:11:10.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:10 vm06 bash[20625]: cephadm 2026-03-08T23:11:08.586079+0000 mgr.x (mgr.24448) 18 : cephadm [INF] Updating vm06:/etc/ceph/ceph.client.admin.keyring 2026-03-08T23:11:10.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:10 vm06 bash[20625]: cephadm 2026-03-08T23:11:08.610618+0000 mgr.x (mgr.24448) 19 : 
cephadm [INF] Updating vm11:/var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/config/ceph.client.admin.keyring 2026-03-08T23:11:10.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:10 vm06 bash[20625]: cephadm 2026-03-08T23:11:08.610618+0000 mgr.x (mgr.24448) 19 : cephadm [INF] Updating vm11:/var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/config/ceph.client.admin.keyring 2026-03-08T23:11:10.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:10 vm06 bash[20625]: cephadm 2026-03-08T23:11:08.620574+0000 mgr.x (mgr.24448) 20 : cephadm [INF] Updating vm06:/var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/config/ceph.client.admin.keyring 2026-03-08T23:11:10.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:10 vm06 bash[20625]: cephadm 2026-03-08T23:11:08.620574+0000 mgr.x (mgr.24448) 20 : cephadm [INF] Updating vm06:/var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/config/ceph.client.admin.keyring 2026-03-08T23:11:10.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:10 vm06 bash[20625]: cephadm 2026-03-08T23:11:08.696295+0000 mgr.x (mgr.24448) 21 : cephadm [INF] Rotating authentication key for mgr.y 2026-03-08T23:11:10.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:10 vm06 bash[20625]: cephadm 2026-03-08T23:11:08.696295+0000 mgr.x (mgr.24448) 21 : cephadm [INF] Rotating authentication key for mgr.y 2026-03-08T23:11:10.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:10 vm06 bash[20625]: cephadm 2026-03-08T23:11:08.705939+0000 mgr.x (mgr.24448) 22 : cephadm [INF] Reconfiguring daemon mgr.y on vm06 2026-03-08T23:11:10.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:10 vm06 bash[20625]: cephadm 2026-03-08T23:11:08.705939+0000 mgr.x (mgr.24448) 22 : cephadm [INF] Reconfiguring daemon mgr.y on vm06 2026-03-08T23:11:10.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:10 vm06 bash[20625]: audit 2026-03-08T23:11:09.084307+0000 mon.a (mon.0) 964 : audit [INF] from='mgr.24448 ' entity='mgr.x' 2026-03-08T23:11:10.279 
INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:10 vm06 bash[20625]: audit 2026-03-08T23:11:09.084307+0000 mon.a (mon.0) 964 : audit [INF] from='mgr.24448 ' entity='mgr.x' 2026-03-08T23:11:10.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:10 vm06 bash[20625]: audit 2026-03-08T23:11:09.090538+0000 mon.a (mon.0) 965 : audit [INF] from='mgr.24448 ' entity='mgr.x' 2026-03-08T23:11:10.279 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:10 vm06 bash[20625]: audit 2026-03-08T23:11:09.090538+0000 mon.a (mon.0) 965 : audit [INF] from='mgr.24448 ' entity='mgr.x' 2026-03-08T23:11:10.279 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:10 vm06 bash[27746]: cephadm 2026-03-08T23:11:08.504963+0000 mgr.x (mgr.24448) 13 : cephadm [INF] Updating vm06:/etc/ceph/ceph.conf 2026-03-08T23:11:10.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:10 vm06 bash[27746]: cephadm 2026-03-08T23:11:08.504963+0000 mgr.x (mgr.24448) 13 : cephadm [INF] Updating vm06:/etc/ceph/ceph.conf 2026-03-08T23:11:10.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:10 vm06 bash[27746]: cephadm 2026-03-08T23:11:08.505172+0000 mgr.x (mgr.24448) 14 : cephadm [INF] Updating vm11:/etc/ceph/ceph.conf 2026-03-08T23:11:10.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:10 vm06 bash[27746]: cephadm 2026-03-08T23:11:08.505172+0000 mgr.x (mgr.24448) 14 : cephadm [INF] Updating vm11:/etc/ceph/ceph.conf 2026-03-08T23:11:10.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:10 vm06 bash[27746]: cephadm 2026-03-08T23:11:08.543043+0000 mgr.x (mgr.24448) 15 : cephadm [INF] Updating vm11:/var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/config/ceph.conf 2026-03-08T23:11:10.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:10 vm06 bash[27746]: cephadm 2026-03-08T23:11:08.543043+0000 mgr.x (mgr.24448) 15 : cephadm [INF] Updating vm11:/var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/config/ceph.conf 2026-03-08T23:11:10.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:10 
vm06 bash[27746]: cephadm 2026-03-08T23:11:08.545943+0000 mgr.x (mgr.24448) 16 : cephadm [INF] Updating vm06:/var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/config/ceph.conf 2026-03-08T23:11:10.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:10 vm06 bash[27746]: cephadm 2026-03-08T23:11:08.545943+0000 mgr.x (mgr.24448) 16 : cephadm [INF] Updating vm06:/var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/config/ceph.conf 2026-03-08T23:11:10.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:10 vm06 bash[27746]: cephadm 2026-03-08T23:11:08.577888+0000 mgr.x (mgr.24448) 17 : cephadm [INF] Updating vm11:/etc/ceph/ceph.client.admin.keyring 2026-03-08T23:11:10.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:10 vm06 bash[27746]: cephadm 2026-03-08T23:11:08.577888+0000 mgr.x (mgr.24448) 17 : cephadm [INF] Updating vm11:/etc/ceph/ceph.client.admin.keyring 2026-03-08T23:11:10.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:10 vm06 bash[27746]: cephadm 2026-03-08T23:11:08.586079+0000 mgr.x (mgr.24448) 18 : cephadm [INF] Updating vm06:/etc/ceph/ceph.client.admin.keyring 2026-03-08T23:11:10.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:10 vm06 bash[27746]: cephadm 2026-03-08T23:11:08.586079+0000 mgr.x (mgr.24448) 18 : cephadm [INF] Updating vm06:/etc/ceph/ceph.client.admin.keyring 2026-03-08T23:11:10.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:10 vm06 bash[27746]: cephadm 2026-03-08T23:11:08.610618+0000 mgr.x (mgr.24448) 19 : cephadm [INF] Updating vm11:/var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/config/ceph.client.admin.keyring 2026-03-08T23:11:10.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:10 vm06 bash[27746]: cephadm 2026-03-08T23:11:08.610618+0000 mgr.x (mgr.24448) 19 : cephadm [INF] Updating vm11:/var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/config/ceph.client.admin.keyring 2026-03-08T23:11:10.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:10 vm06 bash[27746]: cephadm 
2026-03-08T23:11:08.620574+0000 mgr.x (mgr.24448) 20 : cephadm [INF] Updating vm06:/var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/config/ceph.client.admin.keyring 2026-03-08T23:11:10.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:10 vm06 bash[27746]: cephadm 2026-03-08T23:11:08.620574+0000 mgr.x (mgr.24448) 20 : cephadm [INF] Updating vm06:/var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/config/ceph.client.admin.keyring 2026-03-08T23:11:10.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:10 vm06 bash[27746]: cephadm 2026-03-08T23:11:08.696295+0000 mgr.x (mgr.24448) 21 : cephadm [INF] Rotating authentication key for mgr.y 2026-03-08T23:11:10.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:10 vm06 bash[27746]: cephadm 2026-03-08T23:11:08.696295+0000 mgr.x (mgr.24448) 21 : cephadm [INF] Rotating authentication key for mgr.y 2026-03-08T23:11:10.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:10 vm06 bash[27746]: cephadm 2026-03-08T23:11:08.705939+0000 mgr.x (mgr.24448) 22 : cephadm [INF] Reconfiguring daemon mgr.y on vm06 2026-03-08T23:11:10.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:10 vm06 bash[27746]: cephadm 2026-03-08T23:11:08.705939+0000 mgr.x (mgr.24448) 22 : cephadm [INF] Reconfiguring daemon mgr.y on vm06 2026-03-08T23:11:10.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:10 vm06 bash[27746]: audit 2026-03-08T23:11:09.084307+0000 mon.a (mon.0) 964 : audit [INF] from='mgr.24448 ' entity='mgr.x' 2026-03-08T23:11:10.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:10 vm06 bash[27746]: audit 2026-03-08T23:11:09.084307+0000 mon.a (mon.0) 964 : audit [INF] from='mgr.24448 ' entity='mgr.x' 2026-03-08T23:11:10.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:10 vm06 bash[27746]: audit 2026-03-08T23:11:09.090538+0000 mon.a (mon.0) 965 : audit [INF] from='mgr.24448 ' entity='mgr.x' 2026-03-08T23:11:10.280 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:10 vm06 bash[27746]: audit 
2026-03-08T23:11:09.090538+0000 mon.a (mon.0) 965 : audit [INF] from='mgr.24448 ' entity='mgr.x' 2026-03-08T23:11:10.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:10 vm11 bash[23232]: cephadm 2026-03-08T23:11:08.504963+0000 mgr.x (mgr.24448) 13 : cephadm [INF] Updating vm06:/etc/ceph/ceph.conf 2026-03-08T23:11:10.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:10 vm11 bash[23232]: cephadm 2026-03-08T23:11:08.504963+0000 mgr.x (mgr.24448) 13 : cephadm [INF] Updating vm06:/etc/ceph/ceph.conf 2026-03-08T23:11:10.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:10 vm11 bash[23232]: cephadm 2026-03-08T23:11:08.505172+0000 mgr.x (mgr.24448) 14 : cephadm [INF] Updating vm11:/etc/ceph/ceph.conf 2026-03-08T23:11:10.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:10 vm11 bash[23232]: cephadm 2026-03-08T23:11:08.505172+0000 mgr.x (mgr.24448) 14 : cephadm [INF] Updating vm11:/etc/ceph/ceph.conf 2026-03-08T23:11:10.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:10 vm11 bash[23232]: cephadm 2026-03-08T23:11:08.543043+0000 mgr.x (mgr.24448) 15 : cephadm [INF] Updating vm11:/var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/config/ceph.conf 2026-03-08T23:11:10.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:10 vm11 bash[23232]: cephadm 2026-03-08T23:11:08.543043+0000 mgr.x (mgr.24448) 15 : cephadm [INF] Updating vm11:/var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/config/ceph.conf 2026-03-08T23:11:10.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:10 vm11 bash[23232]: cephadm 2026-03-08T23:11:08.545943+0000 mgr.x (mgr.24448) 16 : cephadm [INF] Updating vm06:/var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/config/ceph.conf 2026-03-08T23:11:10.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:10 vm11 bash[23232]: cephadm 2026-03-08T23:11:08.545943+0000 mgr.x (mgr.24448) 16 : cephadm [INF] Updating vm06:/var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/config/ceph.conf 2026-03-08T23:11:10.558 
INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:10 vm11 bash[23232]: cephadm 2026-03-08T23:11:08.577888+0000 mgr.x (mgr.24448) 17 : cephadm [INF] Updating vm11:/etc/ceph/ceph.client.admin.keyring 2026-03-08T23:11:10.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:10 vm11 bash[23232]: cephadm 2026-03-08T23:11:08.577888+0000 mgr.x (mgr.24448) 17 : cephadm [INF] Updating vm11:/etc/ceph/ceph.client.admin.keyring 2026-03-08T23:11:10.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:10 vm11 bash[23232]: cephadm 2026-03-08T23:11:08.586079+0000 mgr.x (mgr.24448) 18 : cephadm [INF] Updating vm06:/etc/ceph/ceph.client.admin.keyring 2026-03-08T23:11:10.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:10 vm11 bash[23232]: cephadm 2026-03-08T23:11:08.586079+0000 mgr.x (mgr.24448) 18 : cephadm [INF] Updating vm06:/etc/ceph/ceph.client.admin.keyring 2026-03-08T23:11:10.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:10 vm11 bash[23232]: cephadm 2026-03-08T23:11:08.610618+0000 mgr.x (mgr.24448) 19 : cephadm [INF] Updating vm11:/var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/config/ceph.client.admin.keyring 2026-03-08T23:11:10.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:10 vm11 bash[23232]: cephadm 2026-03-08T23:11:08.610618+0000 mgr.x (mgr.24448) 19 : cephadm [INF] Updating vm11:/var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/config/ceph.client.admin.keyring 2026-03-08T23:11:10.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:10 vm11 bash[23232]: cephadm 2026-03-08T23:11:08.620574+0000 mgr.x (mgr.24448) 20 : cephadm [INF] Updating vm06:/var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/config/ceph.client.admin.keyring 2026-03-08T23:11:10.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:10 vm11 bash[23232]: cephadm 2026-03-08T23:11:08.620574+0000 mgr.x (mgr.24448) 20 : cephadm [INF] Updating vm06:/var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/config/ceph.client.admin.keyring 2026-03-08T23:11:10.558 
INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:10 vm11 bash[23232]: cephadm 2026-03-08T23:11:08.696295+0000 mgr.x (mgr.24448) 21 : cephadm [INF] Rotating authentication key for mgr.y 2026-03-08T23:11:10.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:10 vm11 bash[23232]: cephadm 2026-03-08T23:11:08.696295+0000 mgr.x (mgr.24448) 21 : cephadm [INF] Rotating authentication key for mgr.y 2026-03-08T23:11:10.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:10 vm11 bash[23232]: cephadm 2026-03-08T23:11:08.705939+0000 mgr.x (mgr.24448) 22 : cephadm [INF] Reconfiguring daemon mgr.y on vm06 2026-03-08T23:11:10.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:10 vm11 bash[23232]: cephadm 2026-03-08T23:11:08.705939+0000 mgr.x (mgr.24448) 22 : cephadm [INF] Reconfiguring daemon mgr.y on vm06 2026-03-08T23:11:10.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:10 vm11 bash[23232]: audit 2026-03-08T23:11:09.084307+0000 mon.a (mon.0) 964 : audit [INF] from='mgr.24448 ' entity='mgr.x' 2026-03-08T23:11:10.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:10 vm11 bash[23232]: audit 2026-03-08T23:11:09.084307+0000 mon.a (mon.0) 964 : audit [INF] from='mgr.24448 ' entity='mgr.x' 2026-03-08T23:11:10.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:10 vm11 bash[23232]: audit 2026-03-08T23:11:09.090538+0000 mon.a (mon.0) 965 : audit [INF] from='mgr.24448 ' entity='mgr.x' 2026-03-08T23:11:10.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:10 vm11 bash[23232]: audit 2026-03-08T23:11:09.090538+0000 mon.a (mon.0) 965 : audit [INF] from='mgr.24448 ' entity='mgr.x' 2026-03-08T23:11:11.029 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:10 vm06 bash[20883]: ::ffff:192.168.123.111 - - [08/Mar/2026:23:11:10] "GET /metrics HTTP/1.1" 200 - "" "Prometheus/2.51.0" 2026-03-08T23:11:11.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:11 vm06 bash[20625]: cluster 2026-03-08T23:11:09.682592+0000 mgr.x (mgr.24448) 23 : cluster [DBG] pgmap 
v7: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 26 KiB/s rd, 0 B/s wr, 11 op/s 2026-03-08T23:11:11.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:11 vm06 bash[20625]: cluster 2026-03-08T23:11:09.682592+0000 mgr.x (mgr.24448) 23 : cluster [DBG] pgmap v7: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 26 KiB/s rd, 0 B/s wr, 11 op/s 2026-03-08T23:11:11.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:11 vm06 bash[27746]: cluster 2026-03-08T23:11:09.682592+0000 mgr.x (mgr.24448) 23 : cluster [DBG] pgmap v7: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 26 KiB/s rd, 0 B/s wr, 11 op/s 2026-03-08T23:11:11.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:11 vm06 bash[27746]: cluster 2026-03-08T23:11:09.682592+0000 mgr.x (mgr.24448) 23 : cluster [DBG] pgmap v7: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 26 KiB/s rd, 0 B/s wr, 11 op/s 2026-03-08T23:11:11.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:11 vm11 bash[23232]: cluster 2026-03-08T23:11:09.682592+0000 mgr.x (mgr.24448) 23 : cluster [DBG] pgmap v7: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 26 KiB/s rd, 0 B/s wr, 11 op/s 2026-03-08T23:11:11.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:11 vm11 bash[23232]: cluster 2026-03-08T23:11:09.682592+0000 mgr.x (mgr.24448) 23 : cluster [DBG] pgmap v7: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 26 KiB/s rd, 0 B/s wr, 11 op/s 2026-03-08T23:11:11.702 INFO:teuthology.orchestra.run.vm06.stderr:++ ceph auth get-key mgr.x 2026-03-08T23:11:11.883 INFO:teuthology.orchestra.run.vm06.stderr:+ NK=AQD8/q1pTNWwLxAA4MPOr7leog04IIBUGWKCow== 2026-03-08T23:11:11.883 INFO:teuthology.orchestra.run.vm06.stderr:+ '[' AQD8/q1pTNWwLxAA4MPOr7leog04IIBUGWKCow== == AQD8/q1pTNWwLxAA4MPOr7leog04IIBUGWKCow== ']' 2026-03-08T23:11:11.883 
INFO:teuthology.orchestra.run.vm06.stderr:+ sleep 5 2026-03-08T23:11:12.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:12 vm06 bash[20625]: audit 2026-03-08T23:11:11.875696+0000 mon.a (mon.0) 966 : audit [INF] from='client.? 192.168.123.106:0/3808015213' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "mgr.x"}]: dispatch 2026-03-08T23:11:12.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:12 vm06 bash[20625]: audit 2026-03-08T23:11:11.875696+0000 mon.a (mon.0) 966 : audit [INF] from='client.? 192.168.123.106:0/3808015213' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "mgr.x"}]: dispatch 2026-03-08T23:11:12.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:12 vm06 bash[27746]: audit 2026-03-08T23:11:11.875696+0000 mon.a (mon.0) 966 : audit [INF] from='client.? 192.168.123.106:0/3808015213' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "mgr.x"}]: dispatch 2026-03-08T23:11:12.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:12 vm06 bash[27746]: audit 2026-03-08T23:11:11.875696+0000 mon.a (mon.0) 966 : audit [INF] from='client.? 192.168.123.106:0/3808015213' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "mgr.x"}]: dispatch 2026-03-08T23:11:12.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:12 vm11 bash[23232]: audit 2026-03-08T23:11:11.875696+0000 mon.a (mon.0) 966 : audit [INF] from='client.? 192.168.123.106:0/3808015213' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "mgr.x"}]: dispatch 2026-03-08T23:11:12.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:12 vm11 bash[23232]: audit 2026-03-08T23:11:11.875696+0000 mon.a (mon.0) 966 : audit [INF] from='client.? 
192.168.123.106:0/3808015213' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "mgr.x"}]: dispatch 2026-03-08T23:11:13.058 INFO:journalctl@ceph.iscsi.iscsi.a.vm11.stdout:Mar 08 23:11:12 vm11 bash[48986]: debug there is no tcmu-runner data available 2026-03-08T23:11:13.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:13 vm06 bash[20625]: cluster 2026-03-08T23:11:11.682967+0000 mgr.x (mgr.24448) 24 : cluster [DBG] pgmap v8: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s 2026-03-08T23:11:13.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:13 vm06 bash[20625]: cluster 2026-03-08T23:11:11.682967+0000 mgr.x (mgr.24448) 24 : cluster [DBG] pgmap v8: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s 2026-03-08T23:11:13.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:13 vm06 bash[27746]: cluster 2026-03-08T23:11:11.682967+0000 mgr.x (mgr.24448) 24 : cluster [DBG] pgmap v8: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s 2026-03-08T23:11:13.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:13 vm06 bash[27746]: cluster 2026-03-08T23:11:11.682967+0000 mgr.x (mgr.24448) 24 : cluster [DBG] pgmap v8: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s 2026-03-08T23:11:13.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:13 vm11 bash[23232]: cluster 2026-03-08T23:11:11.682967+0000 mgr.x (mgr.24448) 24 : cluster [DBG] pgmap v8: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s 2026-03-08T23:11:13.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:13 vm11 bash[23232]: cluster 2026-03-08T23:11:11.682967+0000 mgr.x (mgr.24448) 24 : cluster [DBG] pgmap v8: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 
20 KiB/s rd, 0 B/s wr, 8 op/s 2026-03-08T23:11:14.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:14 vm06 bash[20625]: audit 2026-03-08T23:11:12.709911+0000 mgr.x (mgr.24448) 25 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:11:14.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:14 vm06 bash[20625]: audit 2026-03-08T23:11:12.709911+0000 mgr.x (mgr.24448) 25 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:11:14.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:14 vm06 bash[27746]: audit 2026-03-08T23:11:12.709911+0000 mgr.x (mgr.24448) 25 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:11:14.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:14 vm06 bash[27746]: audit 2026-03-08T23:11:12.709911+0000 mgr.x (mgr.24448) 25 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:11:14.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:14 vm11 bash[23232]: audit 2026-03-08T23:11:12.709911+0000 mgr.x (mgr.24448) 25 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:11:14.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:14 vm11 bash[23232]: audit 2026-03-08T23:11:12.709911+0000 mgr.x (mgr.24448) 25 : audit [DBG] from='client.24421 -' entity='client.iscsi.iscsi.a' cmd=[{"prefix": "service status", "format": "json"}]: dispatch 2026-03-08T23:11:15.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:15 vm06 bash[20625]: cluster 2026-03-08T23:11:13.683472+0000 mgr.x (mgr.24448) 26 : cluster [DBG] pgmap v9: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB 
avail; 17 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-08T23:11:15.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:15 vm06 bash[20625]: cluster 2026-03-08T23:11:13.683472+0000 mgr.x (mgr.24448) 26 : cluster [DBG] pgmap v9: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 17 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-08T23:11:15.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:15 vm06 bash[27746]: cluster 2026-03-08T23:11:13.683472+0000 mgr.x (mgr.24448) 26 : cluster [DBG] pgmap v9: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 17 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-08T23:11:15.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:15 vm06 bash[27746]: cluster 2026-03-08T23:11:13.683472+0000 mgr.x (mgr.24448) 26 : cluster [DBG] pgmap v9: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 17 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-08T23:11:15.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:15 vm11 bash[23232]: cluster 2026-03-08T23:11:13.683472+0000 mgr.x (mgr.24448) 26 : cluster [DBG] pgmap v9: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 17 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-08T23:11:15.559 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:15 vm11 bash[23232]: cluster 2026-03-08T23:11:13.683472+0000 mgr.x (mgr.24448) 26 : cluster [DBG] pgmap v9: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 17 KiB/s rd, 0 B/s wr, 7 op/s 2026-03-08T23:11:16.885 INFO:teuthology.orchestra.run.vm06.stderr:++ ceph auth get-key mgr.x 2026-03-08T23:11:17.084 INFO:teuthology.orchestra.run.vm06.stderr:+ NK=AQD8/q1pTNWwLxAA4MPOr7leog04IIBUGWKCow== 2026-03-08T23:11:17.084 INFO:teuthology.orchestra.run.vm06.stderr:+ '[' AQD8/q1pTNWwLxAA4MPOr7leog04IIBUGWKCow== == AQD8/q1pTNWwLxAA4MPOr7leog04IIBUGWKCow== ']' 2026-03-08T23:11:17.084 INFO:teuthology.orchestra.run.vm06.stderr:+ sleep 5 2026-03-08T23:11:17.529 
INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:17 vm06 bash[20625]: cluster 2026-03-08T23:11:15.683775+0000 mgr.x (mgr.24448) 27 : cluster [DBG] pgmap v10: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-08T23:11:17.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:17 vm06 bash[20625]: cluster 2026-03-08T23:11:15.683775+0000 mgr.x (mgr.24448) 27 : cluster [DBG] pgmap v10: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-08T23:11:17.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:17 vm06 bash[20625]: audit 2026-03-08T23:11:17.077281+0000 mon.a (mon.0) 967 : audit [INF] from='client.? 192.168.123.106:0/1131779285' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "mgr.x"}]: dispatch 2026-03-08T23:11:17.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:17 vm06 bash[20625]: audit 2026-03-08T23:11:17.077281+0000 mon.a (mon.0) 967 : audit [INF] from='client.? 192.168.123.106:0/1131779285' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "mgr.x"}]: dispatch 2026-03-08T23:11:17.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:17 vm06 bash[27746]: cluster 2026-03-08T23:11:15.683775+0000 mgr.x (mgr.24448) 27 : cluster [DBG] pgmap v10: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-08T23:11:17.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:17 vm06 bash[27746]: cluster 2026-03-08T23:11:15.683775+0000 mgr.x (mgr.24448) 27 : cluster [DBG] pgmap v10: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-08T23:11:17.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:17 vm06 bash[27746]: audit 2026-03-08T23:11:17.077281+0000 mon.a (mon.0) 967 : audit [INF] from='client.? 
192.168.123.106:0/1131779285' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "mgr.x"}]: dispatch 2026-03-08T23:11:17.529 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:17 vm06 bash[27746]: audit 2026-03-08T23:11:17.077281+0000 mon.a (mon.0) 967 : audit [INF] from='client.? 192.168.123.106:0/1131779285' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "mgr.x"}]: dispatch 2026-03-08T23:11:17.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:17 vm11 bash[23232]: cluster 2026-03-08T23:11:15.683775+0000 mgr.x (mgr.24448) 27 : cluster [DBG] pgmap v10: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-08T23:11:17.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:17 vm11 bash[23232]: cluster 2026-03-08T23:11:15.683775+0000 mgr.x (mgr.24448) 27 : cluster [DBG] pgmap v10: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-08T23:11:17.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:17 vm11 bash[23232]: audit 2026-03-08T23:11:17.077281+0000 mon.a (mon.0) 967 : audit [INF] from='client.? 192.168.123.106:0/1131779285' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "mgr.x"}]: dispatch 2026-03-08T23:11:17.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:17 vm11 bash[23232]: audit 2026-03-08T23:11:17.077281+0000 mon.a (mon.0) 967 : audit [INF] from='client.? 
192.168.123.106:0/1131779285' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "mgr.x"}]: dispatch 2026-03-08T23:11:18.180 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:18 vm06 bash[20625]: audit 2026-03-08T23:11:17.167935+0000 mon.b (mon.1) 79 : audit [DBG] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:11:18.180 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:18 vm06 bash[20625]: audit 2026-03-08T23:11:17.167935+0000 mon.b (mon.1) 79 : audit [DBG] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:11:18.180 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:18 vm06 bash[20625]: audit 2026-03-08T23:11:18.051619+0000 mon.a (mon.0) 968 : audit [INF] from='mgr.24448 ' entity='mgr.x' 2026-03-08T23:11:18.180 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:18 vm06 bash[20625]: audit 2026-03-08T23:11:18.051619+0000 mon.a (mon.0) 968 : audit [INF] from='mgr.24448 ' entity='mgr.x' 2026-03-08T23:11:18.180 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:18 vm06 bash[20625]: audit 2026-03-08T23:11:18.056925+0000 mon.b (mon.1) 80 : audit [INF] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "auth get-or-create-pending", "entity": "mgr.x", "format": "json"}]: dispatch 2026-03-08T23:11:18.180 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:18 vm06 bash[20625]: audit 2026-03-08T23:11:18.056925+0000 mon.b (mon.1) 80 : audit [INF] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "auth get-or-create-pending", "entity": "mgr.x", "format": "json"}]: dispatch 2026-03-08T23:11:18.180 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:18 vm06 bash[20625]: audit 2026-03-08T23:11:18.058044+0000 mon.a (mon.0) 969 : audit [INF] from='mgr.24448 ' entity='mgr.x' 2026-03-08T23:11:18.180 
INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:18 vm06 bash[20625]: audit 2026-03-08T23:11:18.058044+0000 mon.a (mon.0) 969 : audit [INF] from='mgr.24448 ' entity='mgr.x' 2026-03-08T23:11:18.180 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:18 vm06 bash[20625]: audit 2026-03-08T23:11:18.059845+0000 mon.a (mon.0) 970 : audit [INF] from='mgr.24448 ' entity='mgr.x' cmd=[{"prefix": "auth get-or-create-pending", "entity": "mgr.x", "format": "json"}]: dispatch 2026-03-08T23:11:18.180 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:18 vm06 bash[20625]: audit 2026-03-08T23:11:18.059845+0000 mon.a (mon.0) 970 : audit [INF] from='mgr.24448 ' entity='mgr.x' cmd=[{"prefix": "auth get-or-create-pending", "entity": "mgr.x", "format": "json"}]: dispatch 2026-03-08T23:11:18.180 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:18 vm06 bash[20625]: audit 2026-03-08T23:11:18.061329+0000 mon.b (mon.1) 81 : audit [INF] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-08T23:11:18.180 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:18 vm06 bash[20625]: audit 2026-03-08T23:11:18.061329+0000 mon.b (mon.1) 81 : audit [INF] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-08T23:11:18.180 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:18 vm06 bash[20625]: audit 2026-03-08T23:11:18.061949+0000 mon.a (mon.0) 971 : audit [INF] from='mgr.24448 ' entity='mgr.x' cmd='[{"prefix": "auth get-or-create-pending", "entity": "mgr.x", "format": "json"}]': finished 2026-03-08T23:11:18.180 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:18 vm06 bash[20625]: audit 2026-03-08T23:11:18.061949+0000 mon.a (mon.0) 971 : audit [INF] from='mgr.24448 ' entity='mgr.x' 
cmd='[{"prefix": "auth get-or-create-pending", "entity": "mgr.x", "format": "json"}]': finished 2026-03-08T23:11:18.180 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:18 vm06 bash[20625]: audit 2026-03-08T23:11:18.062565+0000 mon.b (mon.1) 82 : audit [DBG] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-08T23:11:18.180 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:18 vm06 bash[20625]: audit 2026-03-08T23:11:18.062565+0000 mon.b (mon.1) 82 : audit [DBG] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-08T23:11:18.180 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:18 vm06 bash[20625]: audit 2026-03-08T23:11:18.062906+0000 mon.b (mon.1) 83 : audit [DBG] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:11:18.180 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:18 vm06 bash[20625]: audit 2026-03-08T23:11:18.062906+0000 mon.b (mon.1) 83 : audit [DBG] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:11:18.180 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:18 vm06 bash[20625]: audit 2026-03-08T23:11:18.064160+0000 mon.a (mon.0) 972 : audit [INF] from='mgr.24448 ' entity='mgr.x' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-08T23:11:18.180 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:18 vm06 bash[20625]: audit 2026-03-08T23:11:18.064160+0000 mon.a (mon.0) 972 : audit [INF] from='mgr.24448 ' entity='mgr.x' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-08T23:11:18.180 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:17 vm06 systemd[1]: Stopping Ceph mgr.y for 
e2eb96e6-1b41-11f1-83e5-75f1b5373d30... 2026-03-08T23:11:18.180 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:17 vm06 bash[62869]: ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30-mgr-y 2026-03-08T23:11:18.180 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:18 vm06 systemd[1]: ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@mgr.y.service: Main process exited, code=exited, status=143/n/a 2026-03-08T23:11:18.180 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:18 vm06 systemd[1]: ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@mgr.y.service: Failed with result 'exit-code'. 2026-03-08T23:11:18.180 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:18 vm06 systemd[1]: Stopped Ceph mgr.y for e2eb96e6-1b41-11f1-83e5-75f1b5373d30. 2026-03-08T23:11:18.180 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:18 vm06 systemd[1]: Started Ceph mgr.y for e2eb96e6-1b41-11f1-83e5-75f1b5373d30. 2026-03-08T23:11:18.180 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:18 vm06 bash[27746]: audit 2026-03-08T23:11:17.167935+0000 mon.b (mon.1) 79 : audit [DBG] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:11:18.180 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:18 vm06 bash[27746]: audit 2026-03-08T23:11:17.167935+0000 mon.b (mon.1) 79 : audit [DBG] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:11:18.181 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:18 vm06 bash[27746]: audit 2026-03-08T23:11:18.051619+0000 mon.a (mon.0) 968 : audit [INF] from='mgr.24448 ' entity='mgr.x' 2026-03-08T23:11:18.181 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:18 vm06 bash[27746]: audit 2026-03-08T23:11:18.051619+0000 mon.a (mon.0) 968 : audit [INF] from='mgr.24448 ' entity='mgr.x' 2026-03-08T23:11:18.181 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:18 vm06 bash[27746]: audit 
2026-03-08T23:11:18.056925+0000 mon.b (mon.1) 80 : audit [INF] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "auth get-or-create-pending", "entity": "mgr.x", "format": "json"}]: dispatch 2026-03-08T23:11:18.181 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:18 vm06 bash[27746]: audit 2026-03-08T23:11:18.056925+0000 mon.b (mon.1) 80 : audit [INF] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "auth get-or-create-pending", "entity": "mgr.x", "format": "json"}]: dispatch 2026-03-08T23:11:18.181 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:18 vm06 bash[27746]: audit 2026-03-08T23:11:18.058044+0000 mon.a (mon.0) 969 : audit [INF] from='mgr.24448 ' entity='mgr.x' 2026-03-08T23:11:18.181 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:18 vm06 bash[27746]: audit 2026-03-08T23:11:18.058044+0000 mon.a (mon.0) 969 : audit [INF] from='mgr.24448 ' entity='mgr.x' 2026-03-08T23:11:18.181 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:18 vm06 bash[27746]: audit 2026-03-08T23:11:18.059845+0000 mon.a (mon.0) 970 : audit [INF] from='mgr.24448 ' entity='mgr.x' cmd=[{"prefix": "auth get-or-create-pending", "entity": "mgr.x", "format": "json"}]: dispatch 2026-03-08T23:11:18.181 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:18 vm06 bash[27746]: audit 2026-03-08T23:11:18.059845+0000 mon.a (mon.0) 970 : audit [INF] from='mgr.24448 ' entity='mgr.x' cmd=[{"prefix": "auth get-or-create-pending", "entity": "mgr.x", "format": "json"}]: dispatch 2026-03-08T23:11:18.181 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:18 vm06 bash[27746]: audit 2026-03-08T23:11:18.061329+0000 mon.b (mon.1) 81 : audit [INF] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-08T23:11:18.181 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:18 vm06 bash[27746]: audit 
2026-03-08T23:11:18.061329+0000 mon.b (mon.1) 81 : audit [INF] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-08T23:11:18.181 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:18 vm06 bash[27746]: audit 2026-03-08T23:11:18.061949+0000 mon.a (mon.0) 971 : audit [INF] from='mgr.24448 ' entity='mgr.x' cmd='[{"prefix": "auth get-or-create-pending", "entity": "mgr.x", "format": "json"}]': finished 2026-03-08T23:11:18.181 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:18 vm06 bash[27746]: audit 2026-03-08T23:11:18.061949+0000 mon.a (mon.0) 971 : audit [INF] from='mgr.24448 ' entity='mgr.x' cmd='[{"prefix": "auth get-or-create-pending", "entity": "mgr.x", "format": "json"}]': finished 2026-03-08T23:11:18.181 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:18 vm06 bash[27746]: audit 2026-03-08T23:11:18.062565+0000 mon.b (mon.1) 82 : audit [DBG] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-08T23:11:18.181 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:18 vm06 bash[27746]: audit 2026-03-08T23:11:18.062565+0000 mon.b (mon.1) 82 : audit [DBG] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-08T23:11:18.181 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:18 vm06 bash[27746]: audit 2026-03-08T23:11:18.062906+0000 mon.b (mon.1) 83 : audit [DBG] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:11:18.181 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:18 vm06 bash[27746]: audit 2026-03-08T23:11:18.062906+0000 mon.b (mon.1) 83 : audit [DBG] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:11:18.181 
INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:18 vm06 bash[27746]: audit 2026-03-08T23:11:18.064160+0000 mon.a (mon.0) 972 : audit [INF] from='mgr.24448 ' entity='mgr.x' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-08T23:11:18.181 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:18 vm06 bash[27746]: audit 2026-03-08T23:11:18.064160+0000 mon.a (mon.0) 972 : audit [INF] from='mgr.24448 ' entity='mgr.x' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-08T23:11:18.404 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:18 vm11 bash[23232]: audit 2026-03-08T23:11:17.167935+0000 mon.b (mon.1) 79 : audit [DBG] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:11:18.404 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:18 vm11 bash[23232]: audit 2026-03-08T23:11:17.167935+0000 mon.b (mon.1) 79 : audit [DBG] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:11:18.404 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:18 vm11 bash[23232]: audit 2026-03-08T23:11:18.051619+0000 mon.a (mon.0) 968 : audit [INF] from='mgr.24448 ' entity='mgr.x' 2026-03-08T23:11:18.404 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:18 vm11 bash[23232]: audit 2026-03-08T23:11:18.051619+0000 mon.a (mon.0) 968 : audit [INF] from='mgr.24448 ' entity='mgr.x' 2026-03-08T23:11:18.404 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:18 vm11 bash[23232]: audit 2026-03-08T23:11:18.056925+0000 mon.b (mon.1) 80 : audit [INF] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "auth get-or-create-pending", "entity": "mgr.x", "format": "json"}]: dispatch 2026-03-08T23:11:18.404 
INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:18 vm11 bash[23232]: audit 2026-03-08T23:11:18.056925+0000 mon.b (mon.1) 80 : audit [INF] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "auth get-or-create-pending", "entity": "mgr.x", "format": "json"}]: dispatch 2026-03-08T23:11:18.404 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:18 vm11 bash[23232]: audit 2026-03-08T23:11:18.058044+0000 mon.a (mon.0) 969 : audit [INF] from='mgr.24448 ' entity='mgr.x' 2026-03-08T23:11:18.404 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:18 vm11 bash[23232]: audit 2026-03-08T23:11:18.058044+0000 mon.a (mon.0) 969 : audit [INF] from='mgr.24448 ' entity='mgr.x' 2026-03-08T23:11:18.404 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:18 vm11 bash[23232]: audit 2026-03-08T23:11:18.059845+0000 mon.a (mon.0) 970 : audit [INF] from='mgr.24448 ' entity='mgr.x' cmd=[{"prefix": "auth get-or-create-pending", "entity": "mgr.x", "format": "json"}]: dispatch 2026-03-08T23:11:18.404 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:18 vm11 bash[23232]: audit 2026-03-08T23:11:18.059845+0000 mon.a (mon.0) 970 : audit [INF] from='mgr.24448 ' entity='mgr.x' cmd=[{"prefix": "auth get-or-create-pending", "entity": "mgr.x", "format": "json"}]: dispatch 2026-03-08T23:11:18.404 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:18 vm11 bash[23232]: audit 2026-03-08T23:11:18.061329+0000 mon.b (mon.1) 81 : audit [INF] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-08T23:11:18.404 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:18 vm11 bash[23232]: audit 2026-03-08T23:11:18.061329+0000 mon.b (mon.1) 81 : audit [INF] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow 
*"]}]: dispatch 2026-03-08T23:11:18.405 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:18 vm11 bash[23232]: audit 2026-03-08T23:11:18.061949+0000 mon.a (mon.0) 971 : audit [INF] from='mgr.24448 ' entity='mgr.x' cmd='[{"prefix": "auth get-or-create-pending", "entity": "mgr.x", "format": "json"}]': finished 2026-03-08T23:11:18.405 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:18 vm11 bash[23232]: audit 2026-03-08T23:11:18.061949+0000 mon.a (mon.0) 971 : audit [INF] from='mgr.24448 ' entity='mgr.x' cmd='[{"prefix": "auth get-or-create-pending", "entity": "mgr.x", "format": "json"}]': finished 2026-03-08T23:11:18.405 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:18 vm11 bash[23232]: audit 2026-03-08T23:11:18.062565+0000 mon.b (mon.1) 82 : audit [DBG] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-08T23:11:18.405 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:18 vm11 bash[23232]: audit 2026-03-08T23:11:18.062565+0000 mon.b (mon.1) 82 : audit [DBG] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-08T23:11:18.405 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:18 vm11 bash[23232]: audit 2026-03-08T23:11:18.062906+0000 mon.b (mon.1) 83 : audit [DBG] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:11:18.405 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:18 vm11 bash[23232]: audit 2026-03-08T23:11:18.062906+0000 mon.b (mon.1) 83 : audit [DBG] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-08T23:11:18.405 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:18 vm11 bash[23232]: audit 2026-03-08T23:11:18.064160+0000 mon.a (mon.0) 972 : audit [INF] from='mgr.24448 ' entity='mgr.x' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", 
"profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-08T23:11:18.405 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:18 vm11 bash[23232]: audit 2026-03-08T23:11:18.064160+0000 mon.a (mon.0) 972 : audit [INF] from='mgr.24448 ' entity='mgr.x' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.x", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-08T23:11:18.529 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:18 vm06 bash[62949]: debug 2026-03-08T23:11:18.179+0000 7f3716d53640 1 -- 192.168.123.106:0/383146773 <== mon.0 v2:192.168.123.106:3300/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 194+0+0 (secure 0 0 0) 0x55da88c5f4a0 con 0x55da88c61400 2026-03-08T23:11:18.529 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:18 vm06 bash[62949]: debug 2026-03-08T23:11:18.179+0000 7f3716d53640 0 cephx client: could not set rotating key: decode_decrypt failed. error:bad magic in decode_decrypt, 7262429350009968542 != 18374858748799134293 2026-03-08T23:11:18.529 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:18 vm06 bash[62949]: debug 2026-03-08T23:11:18.247+0000 7f37195be140 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-08T23:11:18.529 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:18 vm06 bash[62949]: debug 2026-03-08T23:11:18.279+0000 7f37195be140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-08T23:11:18.529 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:18 vm06 bash[62949]: debug 2026-03-08T23:11:18.399+0000 7f37195be140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-08T23:11:18.715 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:11:18 vm11 bash[24047]: debug 2026-03-08T23:11:18.518+0000 7fee904ec640 -1 mgr.server reply reply (13) Permission denied access denied: does your client key have mgr caps? 
See http://docs.ceph.com/en/latest/mgr/administrator/#client-authentication 2026-03-08T23:11:18.715 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:11:18 vm11 bash[24047]: debug 2026-03-08T23:11:18.522+0000 7fee8d4e6640 -1 log_channel(cephadm) log [ERR] : Non-zero return from ['ceph', '-k', '/var/lib/ceph/mgr/ceph-x/keyring', '-n', 'mgr.x', 'tell', 'mgr.x', 'rotate-key', '-i', '-']: 2026-03-08T23:11:18.514+0000 7feacb82d640 1 -- 192.168.123.111:0/4153398379 >> [v2:192.168.123.111:3300/0,v1:192.168.123.111:6789/0] conn(0x7feac410c810 msgr2=0x7feac410cc90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-08T23:11:18.715 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:11:18 vm11 bash[24047]: 2026-03-08T23:11:18.514+0000 7feacb82d640 1 --2- 192.168.123.111:0/4153398379 >> [v2:192.168.123.111:3300/0,v1:192.168.123.111:6789/0] conn(0x7feac410c810 0x7feac410cc90 secure :-1 s=READY pgs=113 cs=0 l=1 rev1=1 crypto rx=0x7feab4009a80 tx=0x7feab402f270 comp rx=0 tx=0).stop 2026-03-08T23:11:18.715 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:11:18 vm11 bash[24047]: 2026-03-08T23:11:18.514+0000 7feacb82d640 1 -- 192.168.123.111:0/4153398379 shutdown_connections 2026-03-08T23:11:18.715 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:11:18 vm11 bash[24047]: 2026-03-08T23:11:18.514+0000 7feacb82d640 1 --2- 192.168.123.111:0/4153398379 >> [v2:192.168.123.106:3301/0,v1:192.168.123.106:6790/0] conn(0x7feac410d1d0 0x7feac4113a60 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-08T23:11:18.715 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:11:18 vm11 bash[24047]: 2026-03-08T23:11:18.514+0000 7feacb82d640 1 --2- 192.168.123.111:0/4153398379 >> [v2:192.168.123.111:3300/0,v1:192.168.123.111:6789/0] conn(0x7feac410c810 0x7feac410cc90 unknown :-1 s=CLOSED pgs=113 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-08T23:11:18.715 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:11:18 vm11 bash[24047]: 
2026-03-08T23:11:18.514+0000 7feacb82d640 1 --2- 192.168.123.111:0/4153398379 >> [v2:192.168.123.106:3300/0,v1:192.168.123.106:6789/0] conn(0x7feac410b8f0 0x7feac410bd10 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-08T23:11:18.715 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:11:18 vm11 bash[24047]: 2026-03-08T23:11:18.514+0000 7feacb82d640 1 -- 192.168.123.111:0/4153398379 >> 192.168.123.111:0/4153398379 conn(0x7feac4071bf0 msgr2=0x7feac4074030 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-08T23:11:18.715 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:11:18 vm11 bash[24047]: 2026-03-08T23:11:18.514+0000 7feacb82d640 1 -- 192.168.123.111:0/4153398379 shutdown_connections 2026-03-08T23:11:18.715 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:11:18 vm11 bash[24047]: 2026-03-08T23:11:18.514+0000 7feacb82d640 1 -- 192.168.123.111:0/4153398379 wait complete. 2026-03-08T23:11:18.715 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:11:18 vm11 bash[24047]: 2026-03-08T23:11:18.514+0000 7feacb82d640 1 Processor -- start 2026-03-08T23:11:18.715 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:11:18 vm11 bash[24047]: 2026-03-08T23:11:18.514+0000 7feacb82d640 1 -- start start 2026-03-08T23:11:18.715 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:11:18 vm11 bash[24047]: 2026-03-08T23:11:18.514+0000 7feacb82d640 1 --2- >> [v2:192.168.123.106:3301/0,v1:192.168.123.106:6790/0] conn(0x7feac410b8f0 0x7feac41a0ca0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-08T23:11:18.715 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:11:18 vm11 bash[24047]: 2026-03-08T23:11:18.514+0000 7feacb82d640 1 --2- >> [v2:192.168.123.111:3300/0,v1:192.168.123.111:6789/0] conn(0x7feac410c810 0x7feac41a11e0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-08T23:11:18.715 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:11:18 vm11 bash[24047]: 
2026-03-08T23:11:18.514+0000 7feacb82d640 1 --2- >> [v2:192.168.123.106:3300/0,v1:192.168.123.106:6789/0] conn(0x7feac410d1d0 0x7feac41a8310 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-08T23:11:18.715 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:11:18 vm11 bash[24047]: 2026-03-08T23:11:18.514+0000 7feacb82d640 1 -- --> [v2:192.168.123.106:3300/0,v1:192.168.123.106:6789/0] -- mon_getmap magic: 0 -- 0x7feac4114990 con 0x7feac410d1d0 2026-03-08T23:11:18.715 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:11:18 vm11 bash[24047]: 2026-03-08T23:11:18.514+0000 7feacb82d640 1 -- --> [v2:192.168.123.111:3300/0,v1:192.168.123.111:6789/0] -- mon_getmap magic: 0 -- 0x7feac4114810 con 0x7feac410c810 2026-03-08T23:11:18.715 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:11:18 vm11 bash[24047]: 2026-03-08T23:11:18.514+0000 7feacb82d640 1 -- --> [v2:192.168.123.106:3301/0,v1:192.168.123.106:6790/0] -- mon_getmap magic: 0 -- 0x7feac4114b10 con 0x7feac410b8f0 2026-03-08T23:11:18.715 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:11:18 vm11 bash[24047]: 2026-03-08T23:11:18.514+0000 7feac8da1640 1 --2- >> [v2:192.168.123.111:3300/0,v1:192.168.123.111:6789/0] conn(0x7feac410c810 0x7feac41a11e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-08T23:11:18.715 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:11:18 vm11 bash[24047]: 2026-03-08T23:11:18.514+0000 7feac95a2640 1 --2- >> [v2:192.168.123.106:3301/0,v1:192.168.123.106:6790/0] conn(0x7feac410b8f0 0x7feac41a0ca0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-08T23:11:18.715 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:11:18 vm11 bash[24047]: 2026-03-08T23:11:18.514+0000 7feac8da1640 1 --2- >> [v2:192.168.123.111:3300/0,v1:192.168.123.111:6789/0] conn(0x7feac410c810 0x7feac41a11e0 
unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.111:3300/0 says I am v2:192.168.123.111:56892/0 (socket says 192.168.123.111:56892) 2026-03-08T23:11:18.715 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:11:18 vm11 bash[24047]: 2026-03-08T23:11:18.514+0000 7feac8da1640 1 -- 192.168.123.111:0/2794649027 learned_addr learned my addr 192.168.123.111:0/2794649027 (peer_addr_for_me v2:192.168.123.111:0/0) 2026-03-08T23:11:18.715 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:11:18 vm11 bash[24047]: 2026-03-08T23:11:18.514+0000 7feac9da3640 1 --2- 192.168.123.111:0/2794649027 >> [v2:192.168.123.106:3300/0,v1:192.168.123.106:6789/0] conn(0x7feac410d1d0 0x7feac41a8310 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-08T23:11:18.715 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:11:18 vm11 bash[24047]: 2026-03-08T23:11:18.514+0000 7feac8da1640 1 -- 192.168.123.111:0/2794649027 >> [v2:192.168.123.106:3301/0,v1:192.168.123.106:6790/0] conn(0x7feac410b8f0 msgr2=0x7feac41a0ca0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-08T23:11:18.715 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:11:18 vm11 bash[24047]: 2026-03-08T23:11:18.514+0000 7feac8da1640 1 --2- 192.168.123.111:0/2794649027 >> [v2:192.168.123.106:3301/0,v1:192.168.123.106:6790/0] conn(0x7feac410b8f0 0x7feac41a0ca0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-08T23:11:18.715 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:11:18 vm11 bash[24047]: 2026-03-08T23:11:18.514+0000 7feac8da1640 1 -- 192.168.123.111:0/2794649027 >> [v2:192.168.123.106:3300/0,v1:192.168.123.106:6789/0] conn(0x7feac410d1d0 msgr2=0x7feac41a8310 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-08T23:11:18.715 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:11:18 vm11 bash[24047]: 
2026-03-08T23:11:18.514+0000 7feac8da1640 1 --2- 192.168.123.111:0/2794649027 >> [v2:192.168.123.106:3300/0,v1:192.168.123.106:6789/0] conn(0x7feac410d1d0 0x7feac41a8310 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-08T23:11:18.715 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:11:18 vm11 bash[24047]: 2026-03-08T23:11:18.514+0000 7feac8da1640 1 -- 192.168.123.111:0/2794649027 --> [v2:192.168.123.111:3300/0,v1:192.168.123.111:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7feac41a8a10 con 0x7feac410c810 2026-03-08T23:11:18.715 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:11:18 vm11 bash[24047]: 2026-03-08T23:11:18.514+0000 7feac8da1640 1 --2- 192.168.123.111:0/2794649027 >> [v2:192.168.123.111:3300/0,v1:192.168.123.111:6789/0] conn(0x7feac410c810 0x7feac41a11e0 secure :-1 s=READY pgs=114 cs=0 l=1 rev1=1 crypto rx=0x7feab40099a0 tx=0x7feab4038720 comp rx=0 tx=0).ready entity=mon.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-08T23:11:18.715 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:11:18 vm11 bash[24047]: 2026-03-08T23:11:18.514+0000 7feab27fc640 1 -- 192.168.123.111:0/2794649027 <== mon.1 v2:192.168.123.111:3300/0 1 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7feab4030e50 con 0x7feac410c810 2026-03-08T23:11:18.715 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:11:18 vm11 bash[24047]: 2026-03-08T23:11:18.514+0000 7feab27fc640 1 -- 192.168.123.111:0/2794649027 --> [v2:192.168.123.111:3300/0,v1:192.168.123.111:6789/0] -- auth(proto 2 2 bytes epoch 0) -- 0x7fea9c0037e0 con 0x7feac410c810 2026-03-08T23:11:18.715 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:11:18 vm11 bash[24047]: 2026-03-08T23:11:18.514+0000 7feab27fc640 1 -- 192.168.123.111:0/2794649027 <== mon.1 v2:192.168.123.111:3300/0 2 ==== config(39 keys) ==== 1702+0+0 (secure 0 0 0) 0x7feab4002df0 con 0x7feac410c810 2026-03-08T23:11:18.715 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:11:18 vm11 bash[24047]: 
2026-03-08T23:11:18.514+0000 7feab27fc640 1 -- 192.168.123.111:0/2794649027 <== mon.1 v2:192.168.123.111:3300/0 3 ==== mon_map magic: 0 ==== 413+0+0 (secure 0 0 0) 0x7feab404f650 con 0x7feac410c810 2026-03-08T23:11:18.715 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:11:18 vm11 bash[24047]: 2026-03-08T23:11:18.514+0000 7feacb82d640 1 -- 192.168.123.111:0/2794649027 --> [v2:192.168.123.111:3300/0,v1:192.168.123.111:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7feac41a8ca0 con 0x7feac410c810 2026-03-08T23:11:18.715 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:11:18 vm11 bash[24047]: 2026-03-08T23:11:18.514+0000 7feacb82d640 1 -- 192.168.123.111:0/2794649027 --> [v2:192.168.123.111:3300/0,v1:192.168.123.111:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7feac41a90d0 con 0x7feac410c810 2026-03-08T23:11:18.715 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:11:18 vm11 bash[24047]: 2026-03-08T23:11:18.518+0000 7feab27fc640 1 -- 192.168.123.111:0/2794649027 <== mon.1 v2:192.168.123.111:3300/0 4 ==== auth_reply(proto 2 0 (0) Success) ==== 194+0+0 (secure 0 0 0) 0x7feac41a90d0 con 0x7feac410c810 2026-03-08T23:11:18.715 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:11:18 vm11 bash[24047]: 2026-03-08T23:11:18.518+0000 7feab27fc640 0 cephx client: could not set rotating key: decode_decrypt failed. 
error:bad magic in decode_decrypt, 13511294167931587713 != 18374858748799134293 2026-03-08T23:11:18.715 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:11:18 vm11 bash[24047]: 2026-03-08T23:11:18.518+0000 7feab27fc640 1 -- 192.168.123.111:0/2794649027 <== mon.1 v2:192.168.123.111:3300/0 5 ==== mgrmap(e 26) ==== 100060+0+0 (secure 0 0 0) 0x7feab40366c0 con 0x7feac410c810 2026-03-08T23:11:18.715 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:11:18 vm11 bash[24047]: 2026-03-08T23:11:18.518+0000 7feab27fc640 1 --2- 192.168.123.111:0/2794649027 >> v2:192.168.123.111:6816/2220409749 conn(0x7fea9c077d30 0x7fea9c07a1f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-08T23:11:18.715 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:11:18 vm11 bash[24047]: 2026-03-08T23:11:18.518+0000 7feacb82d640 1 -- 192.168.123.111:0/2794649027 --> v2:192.168.123.111:6816/2220409749 -- command(tid 0: {"prefix": "get_command_descriptions"}) -- 0x7fea8c000d10 con 0x7fea9c077d30 2026-03-08T23:11:18.715 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:11:18 vm11 bash[24047]: 2026-03-08T23:11:18.518+0000 7feab27fc640 1 -- 192.168.123.111:0/2794649027 <== mon.1 v2:192.168.123.111:3300/0 6 ==== osd_map(69..69 src has 1..69) ==== 6396+0+0 (secure 0 0 0) 0x7feab40c80d0 con 0x7feac410c810 2026-03-08T23:11:18.715 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:11:18 vm11 bash[24047]: 2026-03-08T23:11:18.518+0000 7feac95a2640 1 --2- 192.168.123.111:0/2794649027 >> v2:192.168.123.111:6816/2220409749 conn(0x7fea9c077d30 0x7fea9c07a1f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-08T23:11:18.715 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:11:18 vm11 bash[24047]: 2026-03-08T23:11:18.518+0000 7feac95a2640 1 --2- 192.168.123.111:0/2794649027 >> v2:192.168.123.111:6816/2220409749 conn(0x7fea9c077d30 0x7fea9c07a1f0 secure :-1 s=READY pgs=30 cs=0 l=1 rev1=1 
crypto rx=0x7feac41a1b80 tx=0x7feab80073d0 comp rx=0 tx=0).ready entity=mgr.24448 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-08T23:11:18.715 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:11:18 vm11 bash[24047]: 2026-03-08T23:11:18.518+0000 7feab27fc640 1 -- 192.168.123.111:0/2794649027 <== mgr.24448 v2:192.168.123.111:6816/2220409749 1 ==== command_reply(tid 0: -13 access denied: does your client key have mgr caps? See http://docs.ceph.com/en/latest/mgr/administrator/#client-authentication) ==== 134+0+0 (secure 0 0 0) 0x7fea8c000d10 con 0x7fea9c077d30 2026-03-08T23:11:18.715 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:11:18 vm11 bash[24047]: Error EACCES: problem getting command descriptions from mgr.x 2026-03-08T23:11:18.715 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:11:18 vm11 bash[24047]: debug 2026-03-08T23:11:18.602+0000 7feecbb77640 -1 mgr handle_mgr_map I was active but no longer am 2026-03-08T23:11:18.715 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:11:18 vm11 bash[24047]: ignoring --setuser ceph since I am not root 2026-03-08T23:11:18.715 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:11:18 vm11 bash[24047]: ignoring --setgroup ceph since I am not root 2026-03-08T23:11:19.029 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:18 vm06 bash[62949]: debug 2026-03-08T23:11:18.711+0000 7f37195be140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-08T23:11:19.058 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:11:18 vm11 bash[24047]: debug 2026-03-08T23:11:18.710+0000 7f5523777140 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-08T23:11:19.058 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:11:18 vm11 bash[24047]: debug 2026-03-08T23:11:18.746+0000 7f5523777140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-08T23:11:19.058 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:11:18 vm11 bash[24047]: debug 2026-03-08T23:11:18.862+0000 7f5523777140 -1 mgr[py] Module rgw has 
missing NOTIFY_TYPES member
2026-03-08T23:11:19.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:19 vm06 bash[20625]: cluster 2026-03-08T23:11:17.684085+0000 mgr.x (mgr.24448) 28 : cluster [DBG] pgmap v11: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s
2026-03-08T23:11:19.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:19 vm06 bash[20625]: cephadm 2026-03-08T23:11:18.056798+0000 mgr.x (mgr.24448) 29 : cephadm [INF] Rotating authentication key for mgr.x
2026-03-08T23:11:19.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:19 vm06 bash[20625]: cephadm 2026-03-08T23:11:18.063367+0000 mgr.x (mgr.24448) 30 : cephadm [INF] Reconfiguring daemon mgr.x on vm11
2026-03-08T23:11:19.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:19 vm06 bash[20625]: audit 2026-03-08T23:11:18.449530+0000 mon.a (mon.0) 973 : audit [INF] from='mgr.24448 ' entity='mgr.x'
2026-03-08T23:11:19.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:19 vm06 bash[20625]: audit 2026-03-08T23:11:18.457471+0000 mon.a (mon.0) 974 : audit [INF] from='mgr.24448 ' entity='mgr.x'
2026-03-08T23:11:19.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:19 vm06 bash[20625]: audit 2026-03-08T23:11:18.530131+0000 mon.b (mon.1) 84 : audit [INF] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "mgr fail", "who": "x"}]: dispatch
2026-03-08T23:11:19.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:19 vm06 bash[20625]: audit 2026-03-08T23:11:18.532678+0000 mon.a (mon.0) 975 : audit [INF] from='mgr.24448 ' entity='mgr.x' cmd=[{"prefix": "mgr fail", "who": "x"}]: dispatch
2026-03-08T23:11:19.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:19 vm06 bash[20625]: cluster 2026-03-08T23:11:18.533269+0000 mon.a (mon.0) 976 : cluster [INF] Activating manager daemon y
2026-03-08T23:11:19.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:19 vm06 bash[20625]: audit 2026-03-08T23:11:18.595944+0000 mon.a (mon.0) 977 : audit [INF] from='mgr.24448 ' entity='mgr.x' cmd='[{"prefix": "mgr fail", "who": "x"}]': finished
2026-03-08T23:11:19.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:19 vm06 bash[20625]: cluster 2026-03-08T23:11:18.600791+0000 mon.a (mon.0) 978 : cluster [DBG] osdmap e70: 8 total, 8 up, 8 in
2026-03-08T23:11:19.529 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:19 vm06 bash[20625]: cluster 2026-03-08T23:11:18.601367+0000 mon.a (mon.0) 979 : cluster [DBG] mgrmap e27: y(active, starting, since 0.0680848s)
2026-03-08T23:11:19.529 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:19 vm06 bash[62949]: debug 2026-03-08T23:11:19.199+0000 7f37195be140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
2026-03-08T23:11:19.529 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:19 vm06 bash[62949]: debug 2026-03-08T23:11:19.299+0000 7f37195be140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
2026-03-08T23:11:19.529 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:19 vm06 bash[62949]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
2026-03-08T23:11:19.529 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:19 vm06 bash[62949]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
2026-03-08T23:11:19.529 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:19 vm06 bash[62949]: from numpy import show_config as show_numpy_config
2026-03-08T23:11:19.530 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:19 vm06 bash[62949]: debug 2026-03-08T23:11:19.427+0000 7f37195be140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
2026-03-08T23:11:19.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:19 vm06 bash[27746]: cluster 2026-03-08T23:11:17.684085+0000 mgr.x (mgr.24448) 28 : cluster [DBG] pgmap v11: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s
2026-03-08T23:11:19.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:19 vm06 bash[27746]: cephadm 2026-03-08T23:11:18.056798+0000 mgr.x (mgr.24448) 29 : cephadm [INF] Rotating authentication key for mgr.x
2026-03-08T23:11:19.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:19 vm06 bash[27746]: cephadm 2026-03-08T23:11:18.063367+0000 mgr.x (mgr.24448) 30 : cephadm [INF] Reconfiguring daemon mgr.x on vm11
2026-03-08T23:11:19.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:19 vm06 bash[27746]: audit 2026-03-08T23:11:18.449530+0000 mon.a (mon.0) 973 : audit [INF] from='mgr.24448 ' entity='mgr.x'
2026-03-08T23:11:19.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:19 vm06 bash[27746]: audit 2026-03-08T23:11:18.457471+0000 mon.a (mon.0) 974 : audit [INF] from='mgr.24448 ' entity='mgr.x'
2026-03-08T23:11:19.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:19 vm06 bash[27746]: audit 2026-03-08T23:11:18.530131+0000 mon.b (mon.1) 84 : audit [INF] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "mgr fail", "who": "x"}]: dispatch
2026-03-08T23:11:19.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:19 vm06 bash[27746]: audit 2026-03-08T23:11:18.532678+0000 mon.a (mon.0) 975 : audit [INF] from='mgr.24448 ' entity='mgr.x' cmd=[{"prefix": "mgr fail", "who": "x"}]: dispatch
2026-03-08T23:11:19.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:19 vm06 bash[27746]: cluster 2026-03-08T23:11:18.533269+0000 mon.a (mon.0) 976 : cluster [INF] Activating manager daemon y
2026-03-08T23:11:19.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:19 vm06 bash[27746]: audit 2026-03-08T23:11:18.595944+0000 mon.a (mon.0) 977 : audit [INF] from='mgr.24448 ' entity='mgr.x' cmd='[{"prefix": "mgr fail", "who": "x"}]': finished
2026-03-08T23:11:19.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:19 vm06 bash[27746]: cluster 2026-03-08T23:11:18.600791+0000 mon.a (mon.0) 978 : cluster [DBG] osdmap e70: 8 total, 8 up, 8 in
2026-03-08T23:11:19.530 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:19 vm06 bash[27746]: cluster 2026-03-08T23:11:18.601367+0000 mon.a (mon.0) 979 : cluster [DBG] mgrmap e27: y(active, starting, since 0.0680848s)
2026-03-08T23:11:19.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:19 vm11 bash[23232]: cluster 2026-03-08T23:11:17.684085+0000 mgr.x (mgr.24448) 28 : cluster [DBG] pgmap v11: 132 pgs: 132 active+clean; 455 KiB data, 216 MiB used, 160 GiB / 160 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s
2026-03-08T23:11:19.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:19 vm11 bash[23232]: cephadm 2026-03-08T23:11:18.056798+0000 mgr.x (mgr.24448) 29 : cephadm [INF] Rotating authentication key for mgr.x
2026-03-08T23:11:19.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:19 vm11 bash[23232]: cephadm 2026-03-08T23:11:18.063367+0000 mgr.x (mgr.24448) 30 : cephadm [INF] Reconfiguring daemon mgr.x on vm11
2026-03-08T23:11:19.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:19 vm11 bash[23232]: audit 2026-03-08T23:11:18.449530+0000 mon.a (mon.0) 973 : audit [INF] from='mgr.24448 ' entity='mgr.x'
2026-03-08T23:11:19.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:19 vm11 bash[23232]: audit 2026-03-08T23:11:18.457471+0000 mon.a (mon.0) 974 : audit [INF] from='mgr.24448 ' entity='mgr.x'
2026-03-08T23:11:19.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:19 vm11 bash[23232]: audit 2026-03-08T23:11:18.530131+0000 mon.b (mon.1) 84 : audit [INF] from='mgr.24448 192.168.123.111:0/2829816899' entity='mgr.x' cmd=[{"prefix": "mgr fail", "who": "x"}]: dispatch
2026-03-08T23:11:19.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:19 vm11 bash[23232]: audit 2026-03-08T23:11:18.532678+0000 mon.a (mon.0) 975 : audit [INF] from='mgr.24448 ' entity='mgr.x' cmd=[{"prefix": "mgr fail", "who": "x"}]: dispatch
2026-03-08T23:11:19.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:19 vm11 bash[23232]: cluster 2026-03-08T23:11:18.533269+0000 mon.a (mon.0) 976 : cluster [INF] Activating manager daemon y
2026-03-08T23:11:19.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:19 vm11 bash[23232]: audit 2026-03-08T23:11:18.595944+0000 mon.a (mon.0) 977 : audit [INF] from='mgr.24448 ' entity='mgr.x' cmd='[{"prefix": "mgr fail", "who": "x"}]': finished
2026-03-08T23:11:19.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:19 vm11 bash[23232]: cluster 2026-03-08T23:11:18.600791+0000 mon.a (mon.0) 978 : cluster [DBG] osdmap e70: 8 total, 8 up, 8 in
2026-03-08T23:11:19.558 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:19 vm11 bash[23232]: cluster 2026-03-08T23:11:18.601367+0000 mon.a (mon.0) 979 : cluster [DBG] mgrmap e27: y(active, starting, since 0.0680848s)
2026-03-08T23:11:19.558 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:11:19 vm11 bash[24047]: debug 2026-03-08T23:11:19.158+0000 7f5523777140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
2026-03-08T23:11:20.014 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:11:19 vm11 bash[24047]: debug 2026-03-08T23:11:19.650+0000 7f5523777140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
2026-03-08T23:11:20.014 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:11:19 vm11 bash[24047]: debug 2026-03-08T23:11:19.734+0000 7f5523777140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
2026-03-08T23:11:20.014 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:11:19 vm11 bash[24047]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
2026-03-08T23:11:20.014 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:11:19 vm11 bash[24047]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
2026-03-08T23:11:20.014 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:11:19 vm11 bash[24047]: from numpy import show_config as show_numpy_config
2026-03-08T23:11:20.014 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:11:19 vm11 bash[24047]: debug 2026-03-08T23:11:19.866+0000 7f5523777140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
2026-03-08T23:11:20.028 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:19 vm06 bash[62949]: debug 2026-03-08T23:11:19.567+0000 7f37195be140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
2026-03-08T23:11:20.029 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:19 vm06 bash[62949]: debug 2026-03-08T23:11:19.603+0000 7f37195be140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
2026-03-08T23:11:20.029 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:19 vm06 bash[62949]: debug 2026-03-08T23:11:19.643+0000 7f37195be140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
2026-03-08T23:11:20.029 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:19 vm06 bash[62949]: debug 2026-03-08T23:11:19.687+0000 7f37195be140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
2026-03-08T23:11:20.029 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:19 vm06 bash[62949]: debug 2026-03-08T23:11:19.739+0000 7f37195be140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
2026-03-08T23:11:20.308 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:11:20 vm11 bash[24047]: debug 2026-03-08T23:11:20.010+0000 7f5523777140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
2026-03-08T23:11:20.308 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:11:20 vm11 bash[24047]: debug 2026-03-08T23:11:20.046+0000 7f5523777140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
2026-03-08T23:11:20.308 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:11:20 vm11 bash[24047]: debug 2026-03-08T23:11:20.082+0000 7f5523777140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
2026-03-08T23:11:20.308 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:11:20 vm11 bash[24047]: debug 2026-03-08T23:11:20.126+0000 7f5523777140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
2026-03-08T23:11:20.308 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:11:20 vm11 bash[24047]: debug 2026-03-08T23:11:20.178+0000 7f5523777140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
2026-03-08T23:11:20.450 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:20 vm06 bash[62949]: debug 2026-03-08T23:11:20.187+0000 7f37195be140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
2026-03-08T23:11:20.450 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:20 vm06 bash[62949]: debug 2026-03-08T23:11:20.223+0000 7f37195be140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
2026-03-08T23:11:20.451 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:20 vm06 bash[62949]: debug 2026-03-08T23:11:20.259+0000 7f37195be140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
2026-03-08T23:11:20.451 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:20 vm06 bash[62949]: debug 2026-03-08T23:11:20.403+0000 7f37195be140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
2026-03-08T23:11:20.771 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:20 vm06 bash[62949]: debug 2026-03-08T23:11:20.447+0000 7f37195be140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
2026-03-08T23:11:20.771 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:20 vm06 bash[62949]: debug 2026-03-08T23:11:20.491+0000 7f37195be140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
2026-03-08T23:11:20.771 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:20 vm06 bash[62949]: debug 2026-03-08T23:11:20.603+0000 7f37195be140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
2026-03-08T23:11:20.874 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:11:20 vm11 bash[24047]: debug 2026-03-08T23:11:20.606+0000 7f5523777140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
2026-03-08T23:11:20.874 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:11:20 vm11 bash[24047]: debug 2026-03-08T23:11:20.642+0000 7f5523777140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
2026-03-08T23:11:20.875 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:11:20 vm11 bash[24047]: debug 2026-03-08T23:11:20.678+0000 7f5523777140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
2026-03-08T23:11:20.875 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:11:20 vm11 bash[24047]: debug 2026-03-08T23:11:20.826+0000 7f5523777140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
2026-03-08T23:11:21.029 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:20 vm06 bash[62949]: debug 2026-03-08T23:11:20.767+0000 7f37195be140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
2026-03-08T23:11:21.029 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:20 vm06 bash[62949]: debug 2026-03-08T23:11:20.955+0000 7f37195be140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
2026-03-08T23:11:21.029 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:20 vm06 bash[62949]: debug 2026-03-08T23:11:20.991+0000 7f37195be140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
2026-03-08T23:11:21.189 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:11:20 vm11 bash[24047]: debug 2026-03-08T23:11:20.870+0000 7f5523777140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
2026-03-08T23:11:21.190 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:11:20 vm11 bash[24047]: debug 2026-03-08T23:11:20.910+0000 7f5523777140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
2026-03-08T23:11:21.190
INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:11:21 vm11 bash[24047]: debug 2026-03-08T23:11:21.022+0000 7f5523777140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
2026-03-08T23:11:21.423 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:21 vm06 bash[62949]: debug 2026-03-08T23:11:21.035+0000 7f37195be140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
2026-03-08T23:11:21.423 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:21 vm06 bash[62949]: debug 2026-03-08T23:11:21.187+0000 7f37195be140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
2026-03-08T23:11:21.453 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:11:21 vm11 bash[24047]: debug 2026-03-08T23:11:21.182+0000 7f5523777140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
2026-03-08T23:11:21.453 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:11:21 vm11 bash[24047]: debug 2026-03-08T23:11:21.362+0000 7f5523777140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
2026-03-08T23:11:21.453 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:11:21 vm11 bash[24047]: debug 2026-03-08T23:11:21.398+0000 7f5523777140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
2026-03-08T23:11:21.686 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:21 vm06 bash[20625]: cluster 2026-03-08T23:11:21.426718+0000 mon.a (mon.0) 980 : cluster [INF] Active manager daemon y restarted
2026-03-08T23:11:21.686 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:21 vm06 bash[20625]: cluster 2026-03-08T23:11:21.426976+0000 mon.a (mon.0) 981 : cluster [INF] Activating manager daemon y
2026-03-08T23:11:21.686 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:21 vm06 bash[20625]: audit 2026-03-08T23:11:21.430157+0000 mon.c (mon.2) 171 : audit [DBG] from='mgr.? 192.168.123.106:0/2849754449' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/y/crt"}]: dispatch
2026-03-08T23:11:21.686 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:21 vm06 bash[20625]: cluster 2026-03-08T23:11:21.433440+0000 mon.a (mon.0) 982 : cluster [DBG] osdmap e71: 8 total, 8 up, 8 in
2026-03-08T23:11:21.686 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:21 vm06 bash[20625]: cluster 2026-03-08T23:11:21.435053+0000 mon.a (mon.0) 983 : cluster [DBG] mgrmap e28: y(active, starting, since 0.00813832s)
2026-03-08T23:11:21.686 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:21 vm06 bash[20625]: audit 2026-03-08T23:11:21.445911+0000 mon.b (mon.1) 85 : audit [DBG] from='mgr.25109 192.168.123.106:0/2849754449' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/y/crt"}]: dispatch
2026-03-08T23:11:21.686 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:21 vm06 bash[20625]: audit 2026-03-08T23:11:21.446114+0000 mon.b (mon.1) 86 : audit [DBG] from='mgr.25109 192.168.123.106:0/2849754449' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-08T23:11:21.686 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:21 vm06 bash[20625]: audit 2026-03-08T23:11:21.446344+0000 mon.b (mon.1) 87 : audit [DBG] from='mgr.25109 192.168.123.106:0/2849754449' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-08T23:11:21.686 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:21 vm06 bash[20625]: audit 2026-03-08T23:11:21.446540+0000 mon.b (mon.1) 88 : audit [DBG] from='mgr.25109 192.168.123.106:0/2849754449' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-08T23:11:21.686 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:21 vm06 bash[20625]: audit 2026-03-08T23:11:21.446775+0000 mon.b (mon.1) 89 : audit [DBG] from='mgr.25109 192.168.123.106:0/2849754449' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch
2026-03-08T23:11:21.686 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:21 vm06 bash[20625]: audit 2026-03-08T23:11:21.447371+0000 mon.b (mon.1) 90 : audit [DBG] from='mgr.25109 192.168.123.106:0/2849754449' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-08T23:11:21.686 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:21 vm06 bash[20625]: audit 2026-03-08T23:11:21.447575+0000 mon.b (mon.1) 91 : audit [DBG] from='mgr.25109 192.168.123.106:0/2849754449' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-08T23:11:21.687 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:21 vm06 bash[20625]: audit 2026-03-08T23:11:21.447809+0000 mon.b (mon.1) 92 : audit [DBG] from='mgr.25109 192.168.123.106:0/2849754449' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-08T23:11:21.687 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:21 vm06 bash[20625]: audit 2026-03-08T23:11:21.448032+0000 mon.b (mon.1) 93 : audit [DBG] from='mgr.25109 192.168.123.106:0/2849754449' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-08T23:11:21.687 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:21 vm06 bash[20625]: audit 2026-03-08T23:11:21.448266+0000 mon.b (mon.1) 94 : audit [DBG] from='mgr.25109 192.168.123.106:0/2849754449' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-08T23:11:21.687 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:21 vm06 bash[20625]: audit 2026-03-08T23:11:21.448515+0000 mon.b (mon.1) 95 : audit [DBG] from='mgr.25109 192.168.123.106:0/2849754449' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-08T23:11:21.687 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:21 vm06 bash[20625]: audit 2026-03-08T23:11:21.448741+0000 mon.b (mon.1) 96 : audit [DBG] from='mgr.25109 192.168.123.106:0/2849754449' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-08T23:11:21.687 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:21 vm06 bash[20625]: audit 2026-03-08T23:11:21.448884+0000 mon.b (mon.1) 97 : audit [DBG] from='mgr.25109 192.168.123.106:0/2849754449' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-08T23:11:21.687 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:21 vm06 bash[20625]: audit 2026-03-08T23:11:21.449209+0000 mon.b (mon.1) 98 : audit [DBG] from='mgr.25109 192.168.123.106:0/2849754449' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch
2026-03-08T23:11:21.687 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:21 vm06 bash[20625]: audit 2026-03-08T23:11:21.449386+0000 mon.b (mon.1) 99 : audit [DBG] from='mgr.25109 192.168.123.106:0/2849754449' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch
2026-03-08T23:11:21.687 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:21 vm06 bash[20625]: audit 2026-03-08T23:11:21.449587+0000 mon.b (mon.1) 100 : audit [DBG] from='mgr.25109 192.168.123.106:0/2849754449' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch
2026-03-08T23:11:21.687 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:21 vm06 bash[20625]: audit 2026-03-08T23:11:21.449948+0000 mon.b (mon.1) 101 : audit [DBG] from='mgr.25109 192.168.123.106:0/2849754449' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch
2026-03-08T23:11:21.687 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:21 vm06 bash[20625]: audit 2026-03-08T23:11:21.450167+0000 mon.b (mon.1) 102 : audit [DBG] from='mgr.25109 192.168.123.106:0/2849754449' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/y/key"}]: dispatch
2026-03-08T23:11:21.687 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:21 vm06 bash[20625]: audit 2026-03-08T23:11:21.450999+0000 mon.b (mon.1) 103 : audit [DBG] from='mgr.25109 192.168.123.106:0/2849754449' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch
2026-03-08T23:11:21.687 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:21 vm06 bash[20625]: audit 2026-03-08T23:11:21.450999+0000 mon.b (mon.1) 103 : audit [DBG] from='mgr.25109 192.168.123.106:0/2849754449' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-08T23:11:21.687 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:21 vm06 bash[62949]: debug 2026-03-08T23:11:21.419+0000 7f37195be140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-08T23:11:21.687 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:21 vm06 bash[62949]: [08/Mar/2026:23:11:21] ENGINE Bus STARTING 2026-03-08T23:11:21.687 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:21 vm06 bash[62949]: CherryPy Checker: 2026-03-08T23:11:21.687 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:21 vm06 bash[62949]: The Application mounted at '' has an empty config. 2026-03-08T23:11:21.687 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:21 vm06 bash[62949]: [08/Mar/2026:23:11:21] ENGINE Serving on http://:::9283 2026-03-08T23:11:21.687 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:21 vm06 bash[62949]: [08/Mar/2026:23:11:21] ENGINE Bus STARTED 2026-03-08T23:11:21.687 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:21 vm06 bash[62949]: [08/Mar/2026:23:11:21] ENGINE Bus STOPPING 2026-03-08T23:11:21.687 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:21 vm06 bash[62949]: [08/Mar/2026:23:11:21] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down 2026-03-08T23:11:21.687 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:21 vm06 bash[62949]: [08/Mar/2026:23:11:21] ENGINE Bus STOPPED 2026-03-08T23:11:21.687 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:21 vm06 bash[62949]: [08/Mar/2026:23:11:21] ENGINE Bus STARTING 2026-03-08T23:11:21.687 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:21 vm06 bash[27746]: cluster 2026-03-08T23:11:21.426718+0000 mon.a (mon.0) 980 : cluster [INF] 
Active manager daemon y restarted
2026-03-08T23:11:21.687 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:21 vm06 bash[27746]: cluster 2026-03-08T23:11:21.426976+0000 mon.a (mon.0) 981 : cluster [INF] Activating manager daemon y
2026-03-08T23:11:21.687 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:21 vm06 bash[27746]: audit 2026-03-08T23:11:21.430157+0000 mon.c (mon.2) 171 : audit [DBG] from='mgr.? 192.168.123.106:0/2849754449' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/y/crt"}]: dispatch
2026-03-08T23:11:21.687 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:21 vm06 bash[27746]: cluster 2026-03-08T23:11:21.433440+0000 mon.a (mon.0) 982 : cluster [DBG] osdmap e71: 8 total, 8 up, 8 in
2026-03-08T23:11:21.688 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:21 vm06 bash[27746]: cluster 2026-03-08T23:11:21.435053+0000 mon.a (mon.0) 983 : cluster [DBG] mgrmap e28: y(active, starting, since 0.00813832s)
2026-03-08T23:11:21.688 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:21 vm06 bash[27746]: audit 2026-03-08T23:11:21.445911+0000 mon.b (mon.1) 85 : audit [DBG] from='mgr.25109 192.168.123.106:0/2849754449' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/y/crt"}]: dispatch
2026-03-08T23:11:21.688 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:21 vm06 bash[27746]: audit 2026-03-08T23:11:21.446114+0000 mon.b (mon.1) 86 : audit [DBG] from='mgr.25109 192.168.123.106:0/2849754449' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-08T23:11:21.688 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:21 vm06 bash[27746]: audit 2026-03-08T23:11:21.446344+0000 mon.b (mon.1) 87 : audit [DBG] from='mgr.25109 192.168.123.106:0/2849754449' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-08T23:11:21.688 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:21 vm06 bash[27746]: audit 2026-03-08T23:11:21.446540+0000 mon.b (mon.1) 88 : audit [DBG] from='mgr.25109 192.168.123.106:0/2849754449' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-08T23:11:21.688 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:21 vm06 bash[27746]: audit 2026-03-08T23:11:21.446775+0000 mon.b (mon.1) 89 : audit [DBG] from='mgr.25109 192.168.123.106:0/2849754449' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch
2026-03-08T23:11:21.688 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:21 vm06 bash[27746]: audit 2026-03-08T23:11:21.447371+0000 mon.b (mon.1) 90 : audit [DBG] from='mgr.25109 192.168.123.106:0/2849754449' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-08T23:11:21.688 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:21 vm06 bash[27746]: audit 2026-03-08T23:11:21.447575+0000 mon.b (mon.1) 91 : audit [DBG] from='mgr.25109 192.168.123.106:0/2849754449' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-08T23:11:21.688 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:21 vm06 bash[27746]: audit 2026-03-08T23:11:21.447809+0000 mon.b (mon.1) 92 : audit [DBG] from='mgr.25109 192.168.123.106:0/2849754449' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-08T23:11:21.688 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:21 vm06 bash[27746]: audit 2026-03-08T23:11:21.448032+0000 mon.b (mon.1) 93 : audit [DBG] from='mgr.25109 192.168.123.106:0/2849754449' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-08T23:11:21.688 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:21 vm06 bash[27746]: audit 2026-03-08T23:11:21.448266+0000 mon.b (mon.1) 94 : audit [DBG] from='mgr.25109 192.168.123.106:0/2849754449' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-08T23:11:21.688 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:21 vm06 bash[27746]: audit 2026-03-08T23:11:21.448515+0000 mon.b (mon.1) 95 : audit [DBG] from='mgr.25109 192.168.123.106:0/2849754449' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-08T23:11:21.688 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:21 vm06 bash[27746]: audit 2026-03-08T23:11:21.448741+0000 mon.b (mon.1) 96 : audit [DBG] from='mgr.25109 192.168.123.106:0/2849754449' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-08T23:11:21.688 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:21 vm06 bash[27746]: audit 2026-03-08T23:11:21.448884+0000 mon.b (mon.1) 97 : audit [DBG] from='mgr.25109 192.168.123.106:0/2849754449' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-08T23:11:21.688 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:21 vm06 bash[27746]: audit 2026-03-08T23:11:21.449209+0000 mon.b (mon.1) 98 : audit [DBG] from='mgr.25109 192.168.123.106:0/2849754449' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch
2026-03-08T23:11:21.688 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:21 vm06 bash[27746]: audit 2026-03-08T23:11:21.449386+0000 mon.b (mon.1) 99 : audit [DBG] from='mgr.25109 192.168.123.106:0/2849754449' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch
2026-03-08T23:11:21.688 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:21 vm06 bash[27746]: audit 2026-03-08T23:11:21.449587+0000 mon.b (mon.1) 100 : audit [DBG] from='mgr.25109 192.168.123.106:0/2849754449' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch
2026-03-08T23:11:21.688 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:21 vm06 bash[27746]: audit 2026-03-08T23:11:21.449948+0000 mon.b (mon.1) 101 : audit [DBG] from='mgr.25109 192.168.123.106:0/2849754449' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch
2026-03-08T23:11:21.688 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:21 vm06 bash[27746]: audit 2026-03-08T23:11:21.450167+0000 mon.b (mon.1) 102 : audit [DBG] from='mgr.25109 192.168.123.106:0/2849754449' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/y/key"}]: dispatch
2026-03-08T23:11:21.688 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:21 vm06 bash[27746]: audit 2026-03-08T23:11:21.450999+0000 mon.b (mon.1) 103 : audit [DBG] from='mgr.25109 192.168.123.106:0/2849754449' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch
2026-03-08T23:11:21.713 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:21 vm11 bash[23232]: cluster 2026-03-08T23:11:21.426718+0000 mon.a (mon.0) 980 : cluster [INF] Active manager daemon y restarted
2026-03-08T23:11:21.713 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:21 vm11 bash[23232]: cluster 2026-03-08T23:11:21.426976+0000 mon.a (mon.0) 981 : cluster [INF] Activating manager daemon y
2026-03-08T23:11:21.713 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:21 vm11 bash[23232]: audit 2026-03-08T23:11:21.430157+0000 mon.c (mon.2) 171 : audit [DBG] from='mgr.? 192.168.123.106:0/2849754449' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/y/crt"}]: dispatch
2026-03-08T23:11:21.713 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:21 vm11 bash[23232]: cluster 2026-03-08T23:11:21.433440+0000 mon.a (mon.0) 982 : cluster [DBG] osdmap e71: 8 total, 8 up, 8 in
2026-03-08T23:11:21.714 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:21 vm11 bash[23232]: cluster 2026-03-08T23:11:21.435053+0000 mon.a (mon.0) 983 : cluster [DBG] mgrmap e28: y(active, starting, since 0.00813832s)
2026-03-08T23:11:21.714 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:21 vm11 bash[23232]: audit 2026-03-08T23:11:21.445911+0000 mon.b (mon.1) 85 : audit [DBG] from='mgr.25109 192.168.123.106:0/2849754449' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/y/crt"}]: dispatch
2026-03-08T23:11:21.714 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:21 vm11 bash[23232]: audit 2026-03-08T23:11:21.446114+0000 mon.b (mon.1) 86 : audit [DBG] from='mgr.25109 192.168.123.106:0/2849754449' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch
2026-03-08T23:11:21.714 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:21 vm11 bash[23232]: audit 2026-03-08T23:11:21.446344+0000 mon.b (mon.1) 87 : audit [DBG] from='mgr.25109 192.168.123.106:0/2849754449' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch
2026-03-08T23:11:21.714 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:21 vm11 bash[23232]: audit 2026-03-08T23:11:21.446540+0000 mon.b (mon.1) 88 : audit [DBG] from='mgr.25109 192.168.123.106:0/2849754449' entity='mgr.y' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch
2026-03-08T23:11:21.714 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:21 vm11 bash[23232]: audit 2026-03-08T23:11:21.446775+0000 mon.b (mon.1) 89 : audit [DBG] from='mgr.25109 192.168.123.106:0/2849754449' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "y", "id": "y"}]: dispatch
2026-03-08T23:11:21.714 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:21 vm11 bash[23232]: audit 2026-03-08T23:11:21.447371+0000 mon.b (mon.1) 90 : audit [DBG] from='mgr.25109 192.168.123.106:0/2849754449' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
2026-03-08T23:11:21.714 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:21 vm11 bash[23232]: audit 2026-03-08T23:11:21.447575+0000 mon.b (mon.1) 91 : audit [DBG] from='mgr.25109 192.168.123.106:0/2849754449' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
2026-03-08T23:11:21.714 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:21 vm11 bash[23232]: audit 2026-03-08T23:11:21.447809+0000 mon.b (mon.1) 92 : audit [DBG] from='mgr.25109 192.168.123.106:0/2849754449' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
2026-03-08T23:11:21.714 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:21 vm11 bash[23232]: audit 2026-03-08T23:11:21.448032+0000 mon.b (mon.1) 93 : audit [DBG] from='mgr.25109 192.168.123.106:0/2849754449' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
2026-03-08T23:11:21.714 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:21 vm11 bash[23232]: audit 2026-03-08T23:11:21.448266+0000 mon.b (mon.1) 94 : audit [DBG] from='mgr.25109 192.168.123.106:0/2849754449' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
2026-03-08T23:11:21.714 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:21 vm11 bash[23232]: audit 2026-03-08T23:11:21.448515+0000 mon.b (mon.1) 95 : audit [DBG] from='mgr.25109 192.168.123.106:0/2849754449' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
2026-03-08T23:11:21.714 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:21 vm11 bash[23232]: audit 2026-03-08T23:11:21.448741+0000 mon.b (mon.1) 96 : audit [DBG] from='mgr.25109 192.168.123.106:0/2849754449' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 6}]: dispatch
2026-03-08T23:11:21.714 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:21 vm11 bash[23232]: audit 2026-03-08T23:11:21.448884+0000 mon.b (mon.1) 97 : audit [DBG] from='mgr.25109 192.168.123.106:0/2849754449' entity='mgr.y' cmd=[{"prefix": "osd metadata", "id": 7}]: dispatch
2026-03-08T23:11:21.714 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:21 vm11 bash[23232]: audit 2026-03-08T23:11:21.449209+0000 mon.b (mon.1) 98 : audit [DBG] from='mgr.25109 192.168.123.106:0/2849754449' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch
2026-03-08T23:11:21.714 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:21 vm11 bash[23232]: audit 2026-03-08T23:11:21.449386+0000 mon.b (mon.1) 99 : audit [DBG] from='mgr.25109 192.168.123.106:0/2849754449' entity='mgr.y' cmd=[{"prefix": "mds metadata"}]: dispatch
2026-03-08T23:11:21.714 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:21 vm11 bash[23232]: audit 2026-03-08T23:11:21.449587+0000 mon.b (mon.1) 100 : audit [DBG] from='mgr.25109 192.168.123.106:0/2849754449' entity='mgr.y' cmd=[{"prefix": "osd metadata"}]: dispatch
2026-03-08T23:11:21.714 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:21 vm11 bash[23232]: audit 2026-03-08T23:11:21.449948+0000 mon.b (mon.1) 101 : audit [DBG] from='mgr.25109 192.168.123.106:0/2849754449' entity='mgr.y' cmd=[{"prefix": "mon metadata"}]: dispatch
2026-03-08T23:11:21.714 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:21 vm11 bash[23232]: audit 2026-03-08T23:11:21.450167+0000 mon.b (mon.1) 102 : audit [DBG] from='mgr.25109 192.168.123.106:0/2849754449' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/y/key"}]: dispatch
2026-03-08T23:11:21.714 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:21 vm11 bash[23232]: audit 2026-03-08T23:11:21.450999+0000 mon.b (mon.1) 103 : audit [DBG] from='mgr.25109 192.168.123.106:0/2849754449' entity='mgr.y' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch
2026-03-08T23:11:21.714 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:11:21 vm11 bash[24047]: debug 2026-03-08T23:11:21.446+0000 7f5523777140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
2026-03-08T23:11:21.714
INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:11:21 vm11 bash[24047]: debug 2026-03-08T23:11:21.602+0000 7f5523777140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-08T23:11:22.029 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:21 vm06 bash[62949]: [08/Mar/2026:23:11:21] ENGINE Serving on http://:::9283 2026-03-08T23:11:22.029 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:21 vm06 bash[62949]: [08/Mar/2026:23:11:21] ENGINE Bus STARTED 2026-03-08T23:11:22.058 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:11:21 vm11 bash[24047]: debug 2026-03-08T23:11:21.850+0000 7f5523777140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-08T23:11:22.058 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:11:21 vm11 bash[24047]: [08/Mar/2026:23:11:21] ENGINE Bus STARTING 2026-03-08T23:11:22.058 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:11:21 vm11 bash[24047]: CherryPy Checker: 2026-03-08T23:11:22.058 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:11:21 vm11 bash[24047]: The Application mounted at '' has an empty config. 
2026-03-08T23:11:22.058 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:11:21 vm11 bash[24047]: [08/Mar/2026:23:11:21] ENGINE Serving on http://:::9283 2026-03-08T23:11:22.058 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:11:21 vm11 bash[24047]: [08/Mar/2026:23:11:21] ENGINE Bus STARTED 2026-03-08T23:11:22.094 INFO:teuthology.orchestra.run.vm06.stderr:++ ceph auth get-key mgr.x 2026-03-08T23:11:22.287 INFO:teuthology.orchestra.run.vm06.stderr:+ NK=AQAWAq5pl4eSAxAAyAQzu27gwlo5bcxyr/68ug== 2026-03-08T23:11:22.288 INFO:teuthology.orchestra.run.vm06.stderr:+ '[' AQD8/q1pTNWwLxAA4MPOr7leog04IIBUGWKCow== == AQAWAq5pl4eSAxAAyAQzu27gwlo5bcxyr/68ug== ']' 2026-03-08T23:11:22.354 DEBUG:teuthology.run_tasks:Unwinding manager cephadm 2026-03-08T23:11:22.356 INFO:tasks.cephadm:Teardown begin 2026-03-08T23:11:22.356 DEBUG:teuthology.orchestra.run.vm06:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring 2026-03-08T23:11:22.365 DEBUG:teuthology.orchestra.run.vm11:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring 2026-03-08T23:11:22.375 INFO:tasks.cephadm:Disabling cephadm mgr module 2026-03-08T23:11:22.375 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid e2eb96e6-1b41-11f1-83e5-75f1b5373d30 -- ceph mgr module disable cephadm 2026-03-08T23:11:22.715 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:22 vm11 bash[23232]: cluster 2026-03-08T23:11:21.630982+0000 mon.a (mon.0) 984 : cluster [INF] Manager daemon y is now available 2026-03-08T23:11:22.715 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:22 vm11 bash[23232]: cluster 2026-03-08T23:11:21.630982+0000 mon.a (mon.0) 984 : cluster [INF] Manager daemon y is now available 2026-03-08T23:11:22.715 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:22 vm11 bash[23232]: audit 2026-03-08T23:11:21.665294+0000 mon.b 
(mon.1) 104 : audit [DBG] from='mgr.25109 192.168.123.106:0/2849754449' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:11:22.715 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:22 vm11 bash[23232]: audit 2026-03-08T23:11:21.665294+0000 mon.b (mon.1) 104 : audit [DBG] from='mgr.25109 192.168.123.106:0/2849754449' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:11:22.715 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:22 vm11 bash[23232]: audit 2026-03-08T23:11:21.665624+0000 mon.b (mon.1) 105 : audit [DBG] from='mgr.25109 192.168.123.106:0/2849754449' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:11:22.715 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:22 vm11 bash[23232]: audit 2026-03-08T23:11:21.665624+0000 mon.b (mon.1) 105 : audit [DBG] from='mgr.25109 192.168.123.106:0/2849754449' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:11:22.715 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:22 vm11 bash[23232]: audit 2026-03-08T23:11:21.666012+0000 mon.b (mon.1) 106 : audit [INF] from='mgr.25109 192.168.123.106:0/2849754449' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-08T23:11:22.715 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:22 vm11 bash[23232]: audit 2026-03-08T23:11:21.666012+0000 mon.b (mon.1) 106 : audit [INF] from='mgr.25109 192.168.123.106:0/2849754449' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-08T23:11:22.715 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:22 vm11 bash[23232]: audit 2026-03-08T23:11:21.668448+0000 mon.a (mon.0) 985 : audit [INF] from='mgr.25109 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 
2026-03-08T23:11:22.715 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:22 vm11 bash[23232]: audit 2026-03-08T23:11:21.668448+0000 mon.a (mon.0) 985 : audit [INF] from='mgr.25109 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-08T23:11:22.715 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:22 vm11 bash[23232]: audit 2026-03-08T23:11:21.707627+0000 mon.b (mon.1) 107 : audit [INF] from='mgr.25109 192.168.123.106:0/2849754449' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-08T23:11:22.715 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:22 vm11 bash[23232]: audit 2026-03-08T23:11:21.707627+0000 mon.b (mon.1) 107 : audit [INF] from='mgr.25109 192.168.123.106:0/2849754449' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-08T23:11:22.715 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:22 vm11 bash[23232]: audit 2026-03-08T23:11:21.710056+0000 mon.a (mon.0) 986 : audit [INF] from='mgr.25109 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-08T23:11:22.715 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:22 vm11 bash[23232]: audit 2026-03-08T23:11:21.710056+0000 mon.a (mon.0) 986 : audit [INF] from='mgr.25109 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-08T23:11:22.715 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:22 vm11 bash[23232]: audit 2026-03-08T23:11:21.860726+0000 mon.b (mon.1) 108 : audit [DBG] from='mgr.? 
192.168.123.111:0/897400430' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-08T23:11:22.715 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:22 vm11 bash[23232]: audit 2026-03-08T23:11:21.860726+0000 mon.b (mon.1) 108 : audit [DBG] from='mgr.? 192.168.123.111:0/897400430' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-08T23:11:22.715 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:22 vm11 bash[23232]: cluster 2026-03-08T23:11:21.860746+0000 mon.a (mon.0) 987 : cluster [DBG] Standby manager daemon x started 2026-03-08T23:11:22.715 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:22 vm11 bash[23232]: cluster 2026-03-08T23:11:21.860746+0000 mon.a (mon.0) 987 : cluster [DBG] Standby manager daemon x started 2026-03-08T23:11:22.715 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:22 vm11 bash[23232]: audit 2026-03-08T23:11:21.861280+0000 mon.b (mon.1) 109 : audit [DBG] from='mgr.? 192.168.123.111:0/897400430' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-08T23:11:22.715 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:22 vm11 bash[23232]: audit 2026-03-08T23:11:21.861280+0000 mon.b (mon.1) 109 : audit [DBG] from='mgr.? 192.168.123.111:0/897400430' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-08T23:11:22.715 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:22 vm11 bash[23232]: audit 2026-03-08T23:11:21.861928+0000 mon.b (mon.1) 110 : audit [DBG] from='mgr.? 192.168.123.111:0/897400430' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-08T23:11:22.715 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:22 vm11 bash[23232]: audit 2026-03-08T23:11:21.861928+0000 mon.b (mon.1) 110 : audit [DBG] from='mgr.? 
192.168.123.111:0/897400430' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-08T23:11:22.716 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:22 vm11 bash[23232]: audit 2026-03-08T23:11:21.862298+0000 mon.b (mon.1) 111 : audit [DBG] from='mgr.? 192.168.123.111:0/897400430' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-08T23:11:22.716 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:22 vm11 bash[23232]: audit 2026-03-08T23:11:21.862298+0000 mon.b (mon.1) 111 : audit [DBG] from='mgr.? 192.168.123.111:0/897400430' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-08T23:11:22.716 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:22 vm11 bash[23232]: audit 2026-03-08T23:11:22.279548+0000 mon.a (mon.0) 988 : audit [INF] from='client.? 192.168.123.106:0/3449915920' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "mgr.x"}]: dispatch 2026-03-08T23:11:22.716 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:22 vm11 bash[23232]: audit 2026-03-08T23:11:22.279548+0000 mon.a (mon.0) 988 : audit [INF] from='client.? 
192.168.123.106:0/3449915920' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "mgr.x"}]: dispatch 2026-03-08T23:11:22.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:22 vm06 bash[20625]: cluster 2026-03-08T23:11:21.630982+0000 mon.a (mon.0) 984 : cluster [INF] Manager daemon y is now available 2026-03-08T23:11:22.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:22 vm06 bash[20625]: cluster 2026-03-08T23:11:21.630982+0000 mon.a (mon.0) 984 : cluster [INF] Manager daemon y is now available 2026-03-08T23:11:22.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:22 vm06 bash[20625]: audit 2026-03-08T23:11:21.665294+0000 mon.b (mon.1) 104 : audit [DBG] from='mgr.25109 192.168.123.106:0/2849754449' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:11:22.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:22 vm06 bash[20625]: audit 2026-03-08T23:11:21.665294+0000 mon.b (mon.1) 104 : audit [DBG] from='mgr.25109 192.168.123.106:0/2849754449' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:11:22.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:22 vm06 bash[20625]: audit 2026-03-08T23:11:21.665624+0000 mon.b (mon.1) 105 : audit [DBG] from='mgr.25109 192.168.123.106:0/2849754449' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:11:22.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:22 vm06 bash[20625]: audit 2026-03-08T23:11:21.665624+0000 mon.b (mon.1) 105 : audit [DBG] from='mgr.25109 192.168.123.106:0/2849754449' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:11:22.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:22 vm06 bash[20625]: audit 2026-03-08T23:11:21.666012+0000 mon.b (mon.1) 106 : audit [INF] from='mgr.25109 192.168.123.106:0/2849754449' entity='mgr.y' cmd=[{"prefix":"config 
rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-08T23:11:22.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:22 vm06 bash[20625]: audit 2026-03-08T23:11:21.666012+0000 mon.b (mon.1) 106 : audit [INF] from='mgr.25109 192.168.123.106:0/2849754449' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-08T23:11:22.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:22 vm06 bash[20625]: audit 2026-03-08T23:11:21.668448+0000 mon.a (mon.0) 985 : audit [INF] from='mgr.25109 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-08T23:11:22.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:22 vm06 bash[20625]: audit 2026-03-08T23:11:21.668448+0000 mon.a (mon.0) 985 : audit [INF] from='mgr.25109 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-08T23:11:22.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:22 vm06 bash[20625]: audit 2026-03-08T23:11:21.707627+0000 mon.b (mon.1) 107 : audit [INF] from='mgr.25109 192.168.123.106:0/2849754449' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-08T23:11:22.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:22 vm06 bash[20625]: audit 2026-03-08T23:11:21.707627+0000 mon.b (mon.1) 107 : audit [INF] from='mgr.25109 192.168.123.106:0/2849754449' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-08T23:11:22.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:22 vm06 bash[20625]: audit 2026-03-08T23:11:21.710056+0000 mon.a (mon.0) 986 : audit [INF] from='mgr.25109 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-08T23:11:22.779 
INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:22 vm06 bash[20625]: audit 2026-03-08T23:11:21.710056+0000 mon.a (mon.0) 986 : audit [INF] from='mgr.25109 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-08T23:11:22.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:22 vm06 bash[20625]: audit 2026-03-08T23:11:21.860726+0000 mon.b (mon.1) 108 : audit [DBG] from='mgr.? 192.168.123.111:0/897400430' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-08T23:11:22.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:22 vm06 bash[20625]: audit 2026-03-08T23:11:21.860726+0000 mon.b (mon.1) 108 : audit [DBG] from='mgr.? 192.168.123.111:0/897400430' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-08T23:11:22.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:22 vm06 bash[20625]: cluster 2026-03-08T23:11:21.860746+0000 mon.a (mon.0) 987 : cluster [DBG] Standby manager daemon x started 2026-03-08T23:11:22.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:22 vm06 bash[20625]: cluster 2026-03-08T23:11:21.860746+0000 mon.a (mon.0) 987 : cluster [DBG] Standby manager daemon x started 2026-03-08T23:11:22.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:22 vm06 bash[20625]: audit 2026-03-08T23:11:21.861280+0000 mon.b (mon.1) 109 : audit [DBG] from='mgr.? 192.168.123.111:0/897400430' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-08T23:11:22.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:22 vm06 bash[20625]: audit 2026-03-08T23:11:21.861280+0000 mon.b (mon.1) 109 : audit [DBG] from='mgr.? 
192.168.123.111:0/897400430' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-08T23:11:22.779 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:22 vm06 bash[20625]: audit 2026-03-08T23:11:21.861928+0000 mon.b (mon.1) 110 : audit [DBG] from='mgr.? 192.168.123.111:0/897400430' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-08T23:11:22.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:22 vm06 bash[20625]: audit 2026-03-08T23:11:21.861928+0000 mon.b (mon.1) 110 : audit [DBG] from='mgr.? 192.168.123.111:0/897400430' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-08T23:11:22.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:22 vm06 bash[20625]: audit 2026-03-08T23:11:21.862298+0000 mon.b (mon.1) 111 : audit [DBG] from='mgr.? 192.168.123.111:0/897400430' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-08T23:11:22.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:22 vm06 bash[20625]: audit 2026-03-08T23:11:21.862298+0000 mon.b (mon.1) 111 : audit [DBG] from='mgr.? 192.168.123.111:0/897400430' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-08T23:11:22.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:22 vm06 bash[20625]: audit 2026-03-08T23:11:22.279548+0000 mon.a (mon.0) 988 : audit [INF] from='client.? 192.168.123.106:0/3449915920' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "mgr.x"}]: dispatch 2026-03-08T23:11:22.780 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:22 vm06 bash[20625]: audit 2026-03-08T23:11:22.279548+0000 mon.a (mon.0) 988 : audit [INF] from='client.? 
192.168.123.106:0/3449915920' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "mgr.x"}]: dispatch 2026-03-08T23:11:22.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:22 vm06 bash[27746]: cluster 2026-03-08T23:11:21.630982+0000 mon.a (mon.0) 984 : cluster [INF] Manager daemon y is now available 2026-03-08T23:11:22.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:22 vm06 bash[27746]: cluster 2026-03-08T23:11:21.630982+0000 mon.a (mon.0) 984 : cluster [INF] Manager daemon y is now available 2026-03-08T23:11:22.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:22 vm06 bash[27746]: audit 2026-03-08T23:11:21.665294+0000 mon.b (mon.1) 104 : audit [DBG] from='mgr.25109 192.168.123.106:0/2849754449' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:11:22.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:22 vm06 bash[27746]: audit 2026-03-08T23:11:21.665294+0000 mon.b (mon.1) 104 : audit [DBG] from='mgr.25109 192.168.123.106:0/2849754449' entity='mgr.y' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-08T23:11:22.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:22 vm06 bash[27746]: audit 2026-03-08T23:11:21.665624+0000 mon.b (mon.1) 105 : audit [DBG] from='mgr.25109 192.168.123.106:0/2849754449' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:11:22.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:22 vm06 bash[27746]: audit 2026-03-08T23:11:21.665624+0000 mon.b (mon.1) 105 : audit [DBG] from='mgr.25109 192.168.123.106:0/2849754449' entity='mgr.y' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-08T23:11:22.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:22 vm06 bash[27746]: audit 2026-03-08T23:11:21.666012+0000 mon.b (mon.1) 106 : audit [INF] from='mgr.25109 192.168.123.106:0/2849754449' entity='mgr.y' cmd=[{"prefix":"config 
rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-08T23:11:22.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:22 vm06 bash[27746]: audit 2026-03-08T23:11:21.666012+0000 mon.b (mon.1) 106 : audit [INF] from='mgr.25109 192.168.123.106:0/2849754449' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-08T23:11:22.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:22 vm06 bash[27746]: audit 2026-03-08T23:11:21.668448+0000 mon.a (mon.0) 985 : audit [INF] from='mgr.25109 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-08T23:11:22.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:22 vm06 bash[27746]: audit 2026-03-08T23:11:21.668448+0000 mon.a (mon.0) 985 : audit [INF] from='mgr.25109 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/mirror_snapshot_schedule"}]: dispatch 2026-03-08T23:11:22.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:22 vm06 bash[27746]: audit 2026-03-08T23:11:21.707627+0000 mon.b (mon.1) 107 : audit [INF] from='mgr.25109 192.168.123.106:0/2849754449' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-08T23:11:22.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:22 vm06 bash[27746]: audit 2026-03-08T23:11:21.707627+0000 mon.b (mon.1) 107 : audit [INF] from='mgr.25109 192.168.123.106:0/2849754449' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-08T23:11:22.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:22 vm06 bash[27746]: audit 2026-03-08T23:11:21.710056+0000 mon.a (mon.0) 986 : audit [INF] from='mgr.25109 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-08T23:11:22.780 
INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:22 vm06 bash[27746]: audit 2026-03-08T23:11:21.710056+0000 mon.a (mon.0) 986 : audit [INF] from='mgr.25109 ' entity='mgr.y' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/y/trash_purge_schedule"}]: dispatch 2026-03-08T23:11:22.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:22 vm06 bash[27746]: audit 2026-03-08T23:11:21.860726+0000 mon.b (mon.1) 108 : audit [DBG] from='mgr.? 192.168.123.111:0/897400430' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-08T23:11:22.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:22 vm06 bash[27746]: audit 2026-03-08T23:11:21.860726+0000 mon.b (mon.1) 108 : audit [DBG] from='mgr.? 192.168.123.111:0/897400430' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/crt"}]: dispatch 2026-03-08T23:11:22.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:22 vm06 bash[27746]: cluster 2026-03-08T23:11:21.860746+0000 mon.a (mon.0) 987 : cluster [DBG] Standby manager daemon x started 2026-03-08T23:11:22.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:22 vm06 bash[27746]: cluster 2026-03-08T23:11:21.860746+0000 mon.a (mon.0) 987 : cluster [DBG] Standby manager daemon x started 2026-03-08T23:11:22.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:22 vm06 bash[27746]: audit 2026-03-08T23:11:21.861280+0000 mon.b (mon.1) 109 : audit [DBG] from='mgr.? 192.168.123.111:0/897400430' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-08T23:11:22.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:22 vm06 bash[27746]: audit 2026-03-08T23:11:21.861280+0000 mon.b (mon.1) 109 : audit [DBG] from='mgr.? 
192.168.123.111:0/897400430' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-08T23:11:22.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:22 vm06 bash[27746]: audit 2026-03-08T23:11:21.861928+0000 mon.b (mon.1) 110 : audit [DBG] from='mgr.? 192.168.123.111:0/897400430' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-08T23:11:22.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:22 vm06 bash[27746]: audit 2026-03-08T23:11:21.861928+0000 mon.b (mon.1) 110 : audit [DBG] from='mgr.? 192.168.123.111:0/897400430' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/x/key"}]: dispatch 2026-03-08T23:11:22.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:22 vm06 bash[27746]: audit 2026-03-08T23:11:21.862298+0000 mon.b (mon.1) 111 : audit [DBG] from='mgr.? 192.168.123.111:0/897400430' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-08T23:11:22.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:22 vm06 bash[27746]: audit 2026-03-08T23:11:21.862298+0000 mon.b (mon.1) 111 : audit [DBG] from='mgr.? 192.168.123.111:0/897400430' entity='mgr.x' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-08T23:11:22.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:22 vm06 bash[27746]: audit 2026-03-08T23:11:22.279548+0000 mon.a (mon.0) 988 : audit [INF] from='client.? 192.168.123.106:0/3449915920' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "mgr.x"}]: dispatch 2026-03-08T23:11:22.780 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:22 vm06 bash[27746]: audit 2026-03-08T23:11:22.279548+0000 mon.a (mon.0) 988 : audit [INF] from='client.? 
192.168.123.106:0/3449915920' entity='client.admin' cmd=[{"prefix": "auth get-key", "entity": "mgr.x"}]: dispatch 2026-03-08T23:11:22.808 INFO:journalctl@ceph.iscsi.iscsi.a.vm11.stdout:Mar 08 23:11:22 vm11 bash[48986]: debug there is no tcmu-runner data available 2026-03-08T23:11:23.808 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:23 vm11 bash[23232]: cluster 2026-03-08T23:11:22.510543+0000 mon.a (mon.0) 989 : cluster [DBG] mgrmap e29: y(active, since 1.08366s), standbys: x 2026-03-08T23:11:23.808 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:23 vm11 bash[23232]: cluster 2026-03-08T23:11:22.510543+0000 mon.a (mon.0) 989 : cluster [DBG] mgrmap e29: y(active, since 1.08366s), standbys: x 2026-03-08T23:11:23.808 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:23 vm11 bash[23232]: audit 2026-03-08T23:11:22.533328+0000 mon.b (mon.1) 112 : audit [DBG] from='mgr.25109 192.168.123.106:0/2849754449' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-08T23:11:23.808 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:23 vm11 bash[23232]: audit 2026-03-08T23:11:22.533328+0000 mon.b (mon.1) 112 : audit [DBG] from='mgr.25109 192.168.123.106:0/2849754449' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-08T23:11:23.808 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:23 vm11 bash[23232]: cephadm 2026-03-08T23:11:22.965601+0000 mgr.y (mgr.25109) 3 : cephadm [INF] [08/Mar/2026:23:11:22] ENGINE Bus STARTING 2026-03-08T23:11:23.808 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:23 vm11 bash[23232]: cephadm 2026-03-08T23:11:22.965601+0000 mgr.y (mgr.25109) 3 : cephadm [INF] [08/Mar/2026:23:11:22] ENGINE Bus STARTING 2026-03-08T23:11:23.808 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:23 vm11 bash[23232]: cephadm 2026-03-08T23:11:23.066910+0000 mgr.y (mgr.25109) 4 : cephadm [INF] [08/Mar/2026:23:11:23] ENGINE Serving on http://192.168.123.106:8765 2026-03-08T23:11:23.808 
INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:23 vm11 bash[23232]: cephadm 2026-03-08T23:11:23.066910+0000 mgr.y (mgr.25109) 4 : cephadm [INF] [08/Mar/2026:23:11:23] ENGINE Serving on http://192.168.123.106:8765 2026-03-08T23:11:23.808 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:23 vm11 bash[23232]: cephadm 2026-03-08T23:11:23.174942+0000 mgr.y (mgr.25109) 5 : cephadm [INF] [08/Mar/2026:23:11:23] ENGINE Serving on https://192.168.123.106:7150 2026-03-08T23:11:23.808 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:23 vm11 bash[23232]: cephadm 2026-03-08T23:11:23.174942+0000 mgr.y (mgr.25109) 5 : cephadm [INF] [08/Mar/2026:23:11:23] ENGINE Serving on https://192.168.123.106:7150 2026-03-08T23:11:23.808 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:23 vm11 bash[23232]: cephadm 2026-03-08T23:11:23.175092+0000 mgr.y (mgr.25109) 6 : cephadm [INF] [08/Mar/2026:23:11:23] ENGINE Bus STARTED 2026-03-08T23:11:23.808 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:23 vm11 bash[23232]: cephadm 2026-03-08T23:11:23.175092+0000 mgr.y (mgr.25109) 6 : cephadm [INF] [08/Mar/2026:23:11:23] ENGINE Bus STARTED 2026-03-08T23:11:23.808 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:23 vm11 bash[23232]: cephadm 2026-03-08T23:11:23.175422+0000 mgr.y (mgr.25109) 7 : cephadm [INF] [08/Mar/2026:23:11:23] ENGINE Client ('192.168.123.106', 39776) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-08T23:11:23.808 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:23 vm11 bash[23232]: cephadm 2026-03-08T23:11:23.175422+0000 mgr.y (mgr.25109) 7 : cephadm [INF] [08/Mar/2026:23:11:23] ENGINE Client ('192.168.123.106', 39776) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-08T23:11:24.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:23 vm06 bash[20625]: cluster 
2026-03-08T23:11:22.510543+0000 mon.a (mon.0) 989 : cluster [DBG] mgrmap e29: y(active, since 1.08366s), standbys: x 2026-03-08T23:11:24.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:23 vm06 bash[20625]: cluster 2026-03-08T23:11:22.510543+0000 mon.a (mon.0) 989 : cluster [DBG] mgrmap e29: y(active, since 1.08366s), standbys: x 2026-03-08T23:11:24.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:23 vm06 bash[20625]: audit 2026-03-08T23:11:22.533328+0000 mon.b (mon.1) 112 : audit [DBG] from='mgr.25109 192.168.123.106:0/2849754449' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-08T23:11:24.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:23 vm06 bash[20625]: audit 2026-03-08T23:11:22.533328+0000 mon.b (mon.1) 112 : audit [DBG] from='mgr.25109 192.168.123.106:0/2849754449' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-08T23:11:24.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:23 vm06 bash[20625]: cephadm 2026-03-08T23:11:22.965601+0000 mgr.y (mgr.25109) 3 : cephadm [INF] [08/Mar/2026:23:11:22] ENGINE Bus STARTING 2026-03-08T23:11:24.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:23 vm06 bash[20625]: cephadm 2026-03-08T23:11:22.965601+0000 mgr.y (mgr.25109) 3 : cephadm [INF] [08/Mar/2026:23:11:22] ENGINE Bus STARTING 2026-03-08T23:11:24.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:23 vm06 bash[20625]: cephadm 2026-03-08T23:11:23.066910+0000 mgr.y (mgr.25109) 4 : cephadm [INF] [08/Mar/2026:23:11:23] ENGINE Serving on http://192.168.123.106:8765 2026-03-08T23:11:24.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:23 vm06 bash[20625]: cephadm 2026-03-08T23:11:23.066910+0000 mgr.y (mgr.25109) 4 : cephadm [INF] [08/Mar/2026:23:11:23] ENGINE Serving on http://192.168.123.106:8765 2026-03-08T23:11:24.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:23 vm06 bash[20625]: cephadm 2026-03-08T23:11:23.174942+0000 mgr.y (mgr.25109) 5 
: cephadm [INF] [08/Mar/2026:23:11:23] ENGINE Serving on https://192.168.123.106:7150 2026-03-08T23:11:24.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:23 vm06 bash[20625]: cephadm 2026-03-08T23:11:23.174942+0000 mgr.y (mgr.25109) 5 : cephadm [INF] [08/Mar/2026:23:11:23] ENGINE Serving on https://192.168.123.106:7150 2026-03-08T23:11:24.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:23 vm06 bash[20625]: cephadm 2026-03-08T23:11:23.175092+0000 mgr.y (mgr.25109) 6 : cephadm [INF] [08/Mar/2026:23:11:23] ENGINE Bus STARTED 2026-03-08T23:11:24.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:23 vm06 bash[20625]: cephadm 2026-03-08T23:11:23.175092+0000 mgr.y (mgr.25109) 6 : cephadm [INF] [08/Mar/2026:23:11:23] ENGINE Bus STARTED 2026-03-08T23:11:24.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:23 vm06 bash[20625]: cephadm 2026-03-08T23:11:23.175422+0000 mgr.y (mgr.25109) 7 : cephadm [INF] [08/Mar/2026:23:11:23] ENGINE Client ('192.168.123.106', 39776) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-08T23:11:24.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:23 vm06 bash[20625]: cephadm 2026-03-08T23:11:23.175422+0000 mgr.y (mgr.25109) 7 : cephadm [INF] [08/Mar/2026:23:11:23] ENGINE Client ('192.168.123.106', 39776) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-08T23:11:24.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:23 vm06 bash[27746]: cluster 2026-03-08T23:11:22.510543+0000 mon.a (mon.0) 989 : cluster [DBG] mgrmap e29: y(active, since 1.08366s), standbys: x 2026-03-08T23:11:24.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:23 vm06 bash[27746]: cluster 2026-03-08T23:11:22.510543+0000 mon.a (mon.0) 989 : cluster [DBG] mgrmap e29: y(active, since 1.08366s), standbys: x 2026-03-08T23:11:24.029 
INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:23 vm06 bash[27746]: audit 2026-03-08T23:11:22.533328+0000 mon.b (mon.1) 112 : audit [DBG] from='mgr.25109 192.168.123.106:0/2849754449' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-08T23:11:24.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:23 vm06 bash[27746]: audit 2026-03-08T23:11:22.533328+0000 mon.b (mon.1) 112 : audit [DBG] from='mgr.25109 192.168.123.106:0/2849754449' entity='mgr.y' cmd=[{"prefix": "mgr metadata", "who": "x", "id": "x"}]: dispatch 2026-03-08T23:11:24.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:23 vm06 bash[27746]: cephadm 2026-03-08T23:11:22.965601+0000 mgr.y (mgr.25109) 3 : cephadm [INF] [08/Mar/2026:23:11:22] ENGINE Bus STARTING 2026-03-08T23:11:24.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:23 vm06 bash[27746]: cephadm 2026-03-08T23:11:22.965601+0000 mgr.y (mgr.25109) 3 : cephadm [INF] [08/Mar/2026:23:11:22] ENGINE Bus STARTING 2026-03-08T23:11:24.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:23 vm06 bash[27746]: cephadm 2026-03-08T23:11:23.066910+0000 mgr.y (mgr.25109) 4 : cephadm [INF] [08/Mar/2026:23:11:23] ENGINE Serving on http://192.168.123.106:8765 2026-03-08T23:11:24.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:23 vm06 bash[27746]: cephadm 2026-03-08T23:11:23.066910+0000 mgr.y (mgr.25109) 4 : cephadm [INF] [08/Mar/2026:23:11:23] ENGINE Serving on http://192.168.123.106:8765 2026-03-08T23:11:24.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:23 vm06 bash[27746]: cephadm 2026-03-08T23:11:23.174942+0000 mgr.y (mgr.25109) 5 : cephadm [INF] [08/Mar/2026:23:11:23] ENGINE Serving on https://192.168.123.106:7150 2026-03-08T23:11:24.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:23 vm06 bash[27746]: cephadm 2026-03-08T23:11:23.174942+0000 mgr.y (mgr.25109) 5 : cephadm [INF] [08/Mar/2026:23:11:23] ENGINE Serving on https://192.168.123.106:7150 2026-03-08T23:11:24.029 
INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:23 vm06 bash[27746]: cephadm 2026-03-08T23:11:23.175092+0000 mgr.y (mgr.25109) 6 : cephadm [INF] [08/Mar/2026:23:11:23] ENGINE Bus STARTED 2026-03-08T23:11:24.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:23 vm06 bash[27746]: cephadm 2026-03-08T23:11:23.175092+0000 mgr.y (mgr.25109) 6 : cephadm [INF] [08/Mar/2026:23:11:23] ENGINE Bus STARTED 2026-03-08T23:11:24.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:23 vm06 bash[27746]: cephadm 2026-03-08T23:11:23.175422+0000 mgr.y (mgr.25109) 7 : cephadm [INF] [08/Mar/2026:23:11:23] ENGINE Client ('192.168.123.106', 39776) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-08T23:11:24.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:23 vm06 bash[27746]: cephadm 2026-03-08T23:11:23.175422+0000 mgr.y (mgr.25109) 7 : cephadm [INF] [08/Mar/2026:23:11:23] ENGINE Client ('192.168.123.106', 39776) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-08T23:11:24.808 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:24 vm11 bash[23232]: cluster 2026-03-08T23:11:23.450370+0000 mgr.y (mgr.25109) 8 : cluster [DBG] pgmap v4: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail 2026-03-08T23:11:24.808 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:24 vm11 bash[23232]: cluster 2026-03-08T23:11:23.450370+0000 mgr.y (mgr.25109) 8 : cluster [DBG] pgmap v4: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail 2026-03-08T23:11:24.808 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:24 vm11 bash[23232]: cluster 2026-03-08T23:11:23.550834+0000 mon.a (mon.0) 990 : cluster [DBG] mgrmap e30: y(active, since 2s), standbys: x 2026-03-08T23:11:24.808 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:24 vm11 bash[23232]: cluster 
2026-03-08T23:11:23.550834+0000 mon.a (mon.0) 990 : cluster [DBG] mgrmap e30: y(active, since 2s), standbys: x 2026-03-08T23:11:25.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:24 vm06 bash[20625]: cluster 2026-03-08T23:11:23.450370+0000 mgr.y (mgr.25109) 8 : cluster [DBG] pgmap v4: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail 2026-03-08T23:11:25.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:24 vm06 bash[20625]: cluster 2026-03-08T23:11:23.450370+0000 mgr.y (mgr.25109) 8 : cluster [DBG] pgmap v4: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail 2026-03-08T23:11:25.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:24 vm06 bash[20625]: cluster 2026-03-08T23:11:23.550834+0000 mon.a (mon.0) 990 : cluster [DBG] mgrmap e30: y(active, since 2s), standbys: x 2026-03-08T23:11:25.029 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:24 vm06 bash[20625]: cluster 2026-03-08T23:11:23.550834+0000 mon.a (mon.0) 990 : cluster [DBG] mgrmap e30: y(active, since 2s), standbys: x 2026-03-08T23:11:25.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:24 vm06 bash[27746]: cluster 2026-03-08T23:11:23.450370+0000 mgr.y (mgr.25109) 8 : cluster [DBG] pgmap v4: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail 2026-03-08T23:11:25.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:24 vm06 bash[27746]: cluster 2026-03-08T23:11:23.450370+0000 mgr.y (mgr.25109) 8 : cluster [DBG] pgmap v4: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail 2026-03-08T23:11:25.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:24 vm06 bash[27746]: cluster 2026-03-08T23:11:23.550834+0000 mon.a (mon.0) 990 : cluster [DBG] mgrmap e30: y(active, since 2s), standbys: x 2026-03-08T23:11:25.029 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:24 vm06 bash[27746]: cluster 2026-03-08T23:11:23.550834+0000 mon.a (mon.0) 990 : cluster [DBG] mgrmap e30: y(active, 
since 2s), standbys: x 2026-03-08T23:11:26.629 INFO:teuthology.orchestra.run.vm06.stderr:Inferring config /var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/mon.c/config 2026-03-08T23:11:26.786 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-08T23:11:26.783+0000 7f8e43fff640 -1 auth: error reading file: /etc/ceph/ceph.keyring: bufferlist::read_file(/etc/ceph/ceph.keyring): read error:(21) Is a directory 2026-03-08T23:11:26.786 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-08T23:11:26.783+0000 7f8e43fff640 -1 auth: failed to load /etc/ceph/ceph.keyring: (21) Is a directory 2026-03-08T23:11:26.787 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-08T23:11:26.783+0000 7f8e43fff640 -1 auth: error reading file: /etc/ceph/ceph.keyring: bufferlist::read_file(/etc/ceph/ceph.keyring): read error:(21) Is a directory 2026-03-08T23:11:26.787 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-08T23:11:26.783+0000 7f8e43fff640 -1 auth: failed to load /etc/ceph/ceph.keyring: (21) Is a directory 2026-03-08T23:11:26.787 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-08T23:11:26.783+0000 7f8e43fff640 -1 auth: error reading file: /etc/ceph/ceph.keyring: bufferlist::read_file(/etc/ceph/ceph.keyring): read error:(21) Is a directory 2026-03-08T23:11:26.787 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-08T23:11:26.783+0000 7f8e43fff640 -1 auth: failed to load /etc/ceph/ceph.keyring: (21) Is a directory 2026-03-08T23:11:26.787 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-08T23:11:26.783+0000 7f8e43fff640 -1 monclient: keyring not found 2026-03-08T23:11:26.787 INFO:teuthology.orchestra.run.vm06.stderr:[errno 21] error connecting to the cluster 2026-03-08T23:11:26.794 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:26 vm06 bash[20625]: cluster 2026-03-08T23:11:25.450704+0000 mgr.y (mgr.25109) 9 : cluster [DBG] pgmap v5: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail 2026-03-08T23:11:26.794 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 
23:11:26 vm06 bash[20625]: cluster 2026-03-08T23:11:25.450704+0000 mgr.y (mgr.25109) 9 : cluster [DBG] pgmap v5: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail 2026-03-08T23:11:26.794 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:26 vm06 bash[20625]: cluster 2026-03-08T23:11:25.553245+0000 mon.a (mon.0) 991 : cluster [DBG] mgrmap e31: y(active, since 4s), standbys: x 2026-03-08T23:11:26.794 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:26 vm06 bash[20625]: cluster 2026-03-08T23:11:25.553245+0000 mon.a (mon.0) 991 : cluster [DBG] mgrmap e31: y(active, since 4s), standbys: x 2026-03-08T23:11:26.794 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:26 vm06 bash[27746]: cluster 2026-03-08T23:11:25.450704+0000 mgr.y (mgr.25109) 9 : cluster [DBG] pgmap v5: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail 2026-03-08T23:11:26.794 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:26 vm06 bash[27746]: cluster 2026-03-08T23:11:25.450704+0000 mgr.y (mgr.25109) 9 : cluster [DBG] pgmap v5: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail 2026-03-08T23:11:26.794 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:26 vm06 bash[27746]: cluster 2026-03-08T23:11:25.553245+0000 mon.a (mon.0) 991 : cluster [DBG] mgrmap e31: y(active, since 4s), standbys: x 2026-03-08T23:11:26.795 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:26 vm06 bash[27746]: cluster 2026-03-08T23:11:25.553245+0000 mon.a (mon.0) 991 : cluster [DBG] mgrmap e31: y(active, since 4s), standbys: x 2026-03-08T23:11:26.808 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:26 vm11 bash[23232]: cluster 2026-03-08T23:11:25.450704+0000 mgr.y (mgr.25109) 9 : cluster [DBG] pgmap v5: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail 2026-03-08T23:11:26.808 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:26 vm11 bash[23232]: cluster 2026-03-08T23:11:25.450704+0000 mgr.y (mgr.25109) 9 
: cluster [DBG] pgmap v5: 132 pgs: 132 active+clean; 455 KiB data, 217 MiB used, 160 GiB / 160 GiB avail 2026-03-08T23:11:26.808 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:26 vm11 bash[23232]: cluster 2026-03-08T23:11:25.553245+0000 mon.a (mon.0) 991 : cluster [DBG] mgrmap e31: y(active, since 4s), standbys: x 2026-03-08T23:11:26.808 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:26 vm11 bash[23232]: cluster 2026-03-08T23:11:25.553245+0000 mon.a (mon.0) 991 : cluster [DBG] mgrmap e31: y(active, since 4s), standbys: x 2026-03-08T23:11:26.869 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-08T23:11:26.869 INFO:tasks.cephadm:Cleaning up testdir ceph.* files... 2026-03-08T23:11:26.869 DEBUG:teuthology.orchestra.run.vm06:> rm -f /home/ubuntu/cephtest/seed.ceph.conf /home/ubuntu/cephtest/ceph.pub 2026-03-08T23:11:26.872 DEBUG:teuthology.orchestra.run.vm11:> rm -f /home/ubuntu/cephtest/seed.ceph.conf /home/ubuntu/cephtest/ceph.pub 2026-03-08T23:11:26.875 INFO:tasks.cephadm:Stopping all daemons... 2026-03-08T23:11:26.875 INFO:tasks.cephadm.mon.a:Stopping mon.a... 2026-03-08T23:11:26.875 DEBUG:teuthology.orchestra.run.vm06:> sudo systemctl stop ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@mon.a 2026-03-08T23:11:27.046 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:26 vm06 systemd[1]: Stopping Ceph mon.a for e2eb96e6-1b41-11f1-83e5-75f1b5373d30... 
2026-03-08T23:11:27.046 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:26 vm06 bash[20625]: debug 2026-03-08T23:11:26.979+0000 7fd4a8f2d640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-mon -n mon.a -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-stderr=true (PID: 1) UID: 0 2026-03-08T23:11:27.046 INFO:journalctl@ceph.mon.a.vm06.stdout:Mar 08 23:11:26 vm06 bash[20625]: debug 2026-03-08T23:11:26.979+0000 7fd4a8f2d640 -1 mon.a@0(leader) e3 *** Got Signal Terminated *** 2026-03-08T23:11:27.100 DEBUG:teuthology.orchestra.run.vm06:> sudo pkill -f 'journalctl -f -n 0 -u ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@mon.a.service' 2026-03-08T23:11:27.116 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-08T23:11:27.116 INFO:tasks.cephadm.mon.a:Stopped mon.a 2026-03-08T23:11:27.116 INFO:tasks.cephadm.mon.b:Stopping mon.c... 2026-03-08T23:11:27.116 DEBUG:teuthology.orchestra.run.vm06:> sudo systemctl stop ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@mon.c 2026-03-08T23:11:27.356 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:27 vm06 systemd[1]: Stopping Ceph mon.c for e2eb96e6-1b41-11f1-83e5-75f1b5373d30... 
2026-03-08T23:11:27.356 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:27 vm06 bash[27746]: debug 2026-03-08T23:11:27.207+0000 7f8e8f909640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-mon -n mon.c -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-stderr=true (PID: 1) UID: 0 2026-03-08T23:11:27.356 INFO:journalctl@ceph.mon.c.vm06.stdout:Mar 08 23:11:27 vm06 bash[27746]: debug 2026-03-08T23:11:27.207+0000 7f8e8f909640 -1 mon.c@2(peon) e3 *** Got Signal Terminated *** 2026-03-08T23:11:27.412 DEBUG:teuthology.orchestra.run.vm06:> sudo pkill -f 'journalctl -f -n 0 -u ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@mon.c.service' 2026-03-08T23:11:27.424 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-08T23:11:27.424 INFO:tasks.cephadm.mon.b:Stopped mon.c 2026-03-08T23:11:27.424 INFO:tasks.cephadm.mon.b:Stopping mon.b... 2026-03-08T23:11:27.424 DEBUG:teuthology.orchestra.run.vm11:> sudo systemctl stop ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@mon.b 2026-03-08T23:11:27.715 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:27 vm11 systemd[1]: Stopping Ceph mon.b for e2eb96e6-1b41-11f1-83e5-75f1b5373d30... 
2026-03-08T23:11:27.715 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:27 vm11 bash[23232]: debug 2026-03-08T23:11:27.466+0000 7fd94dd1a640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-mon -n mon.b -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-stderr=true (PID: 1) UID: 0 2026-03-08T23:11:27.715 INFO:journalctl@ceph.mon.b.vm11.stdout:Mar 08 23:11:27 vm11 bash[23232]: debug 2026-03-08T23:11:27.466+0000 7fd94dd1a640 -1 mon.b@1(peon) e3 *** Got Signal Terminated *** 2026-03-08T23:11:27.787 DEBUG:teuthology.orchestra.run.vm11:> sudo pkill -f 'journalctl -f -n 0 -u ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@mon.b.service' 2026-03-08T23:11:27.799 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-08T23:11:27.799 INFO:tasks.cephadm.mon.b:Stopped mon.b 2026-03-08T23:11:27.799 INFO:tasks.cephadm.mgr.y:Stopping mgr.y... 2026-03-08T23:11:27.799 DEBUG:teuthology.orchestra.run.vm06:> sudo systemctl stop ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@mgr.y 2026-03-08T23:11:27.900 INFO:journalctl@ceph.mgr.y.vm06.stdout:Mar 08 23:11:27 vm06 systemd[1]: Stopping Ceph mgr.y for e2eb96e6-1b41-11f1-83e5-75f1b5373d30... 2026-03-08T23:11:28.011 DEBUG:teuthology.orchestra.run.vm06:> sudo pkill -f 'journalctl -f -n 0 -u ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@mgr.y.service' 2026-03-08T23:11:28.023 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-08T23:11:28.023 INFO:tasks.cephadm.mgr.y:Stopped mgr.y 2026-03-08T23:11:28.023 INFO:tasks.cephadm.mgr.x:Stopping mgr.x... 2026-03-08T23:11:28.023 DEBUG:teuthology.orchestra.run.vm11:> sudo systemctl stop ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@mgr.x 2026-03-08T23:11:28.058 INFO:journalctl@ceph.mgr.x.vm11.stdout:Mar 08 23:11:28 vm11 systemd[1]: Stopping Ceph mgr.x for e2eb96e6-1b41-11f1-83e5-75f1b5373d30... 
2026-03-08T23:11:28.162 DEBUG:teuthology.orchestra.run.vm11:> sudo pkill -f 'journalctl -f -n 0 -u ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@mgr.x.service' 2026-03-08T23:11:28.219 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-08T23:11:28.219 INFO:tasks.cephadm.mgr.x:Stopped mgr.x 2026-03-08T23:11:28.220 INFO:tasks.cephadm.osd.0:Stopping osd.0... 2026-03-08T23:11:28.220 DEBUG:teuthology.orchestra.run.vm06:> sudo systemctl stop ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@osd.0 2026-03-08T23:11:28.529 INFO:journalctl@ceph.osd.0.vm06.stdout:Mar 08 23:11:28 vm06 systemd[1]: Stopping Ceph osd.0 for e2eb96e6-1b41-11f1-83e5-75f1b5373d30... 2026-03-08T23:11:28.529 INFO:journalctl@ceph.osd.0.vm06.stdout:Mar 08 23:11:28 vm06 bash[30635]: debug 2026-03-08T23:11:28.267+0000 7f3b4db55640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0 2026-03-08T23:11:28.529 INFO:journalctl@ceph.osd.0.vm06.stdout:Mar 08 23:11:28 vm06 bash[30635]: debug 2026-03-08T23:11:28.267+0000 7f3b4db55640 -1 osd.0 71 *** Got signal Terminated *** 2026-03-08T23:11:28.529 INFO:journalctl@ceph.osd.0.vm06.stdout:Mar 08 23:11:28 vm06 bash[30635]: debug 2026-03-08T23:11:28.267+0000 7f3b4db55640 -1 osd.0 71 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-08T23:11:33.708 INFO:journalctl@ceph.osd.0.vm06.stdout:Mar 08 23:11:33 vm06 bash[63968]: ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30-osd-0 2026-03-08T23:11:33.752 DEBUG:teuthology.orchestra.run.vm06:> sudo pkill -f 'journalctl -f -n 0 -u ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@osd.0.service' 2026-03-08T23:11:33.764 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-08T23:11:33.764 INFO:tasks.cephadm.osd.0:Stopped osd.0 2026-03-08T23:11:33.764 INFO:tasks.cephadm.osd.1:Stopping osd.1... 
2026-03-08T23:11:33.764 DEBUG:teuthology.orchestra.run.vm06:> sudo systemctl stop ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@osd.1 2026-03-08T23:11:34.029 INFO:journalctl@ceph.osd.1.vm06.stdout:Mar 08 23:11:33 vm06 systemd[1]: Stopping Ceph osd.1 for e2eb96e6-1b41-11f1-83e5-75f1b5373d30... 2026-03-08T23:11:34.029 INFO:journalctl@ceph.osd.1.vm06.stdout:Mar 08 23:11:33 vm06 bash[36572]: debug 2026-03-08T23:11:33.851+0000 7f4936f98640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.1 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0 2026-03-08T23:11:34.029 INFO:journalctl@ceph.osd.1.vm06.stdout:Mar 08 23:11:33 vm06 bash[36572]: debug 2026-03-08T23:11:33.851+0000 7f4936f98640 -1 osd.1 71 *** Got signal Terminated *** 2026-03-08T23:11:34.029 INFO:journalctl@ceph.osd.1.vm06.stdout:Mar 08 23:11:33 vm06 bash[36572]: debug 2026-03-08T23:11:33.851+0000 7f4936f98640 -1 osd.1 71 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-08T23:11:39.175 INFO:journalctl@ceph.osd.1.vm06.stdout:Mar 08 23:11:38 vm06 bash[64150]: ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30-osd-1 2026-03-08T23:11:39.225 DEBUG:teuthology.orchestra.run.vm06:> sudo pkill -f 'journalctl -f -n 0 -u ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@osd.1.service' 2026-03-08T23:11:39.236 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-08T23:11:39.236 INFO:tasks.cephadm.osd.1:Stopped osd.1 2026-03-08T23:11:39.236 INFO:tasks.cephadm.osd.2:Stopping osd.2... 2026-03-08T23:11:39.236 DEBUG:teuthology.orchestra.run.vm06:> sudo systemctl stop ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@osd.2 2026-03-08T23:11:39.528 INFO:journalctl@ceph.osd.2.vm06.stdout:Mar 08 23:11:39 vm06 systemd[1]: Stopping Ceph osd.2 for e2eb96e6-1b41-11f1-83e5-75f1b5373d30... 
2026-03-08T23:11:39.528 INFO:journalctl@ceph.osd.2.vm06.stdout:Mar 08 23:11:39 vm06 bash[42800]: debug 2026-03-08T23:11:39.323+0000 7fb7e4aab640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.2 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0 2026-03-08T23:11:39.528 INFO:journalctl@ceph.osd.2.vm06.stdout:Mar 08 23:11:39 vm06 bash[42800]: debug 2026-03-08T23:11:39.323+0000 7fb7e4aab640 -1 osd.2 71 *** Got signal Terminated *** 2026-03-08T23:11:39.528 INFO:journalctl@ceph.osd.2.vm06.stdout:Mar 08 23:11:39 vm06 bash[42800]: debug 2026-03-08T23:11:39.323+0000 7fb7e4aab640 -1 osd.2 71 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-08T23:11:44.648 INFO:journalctl@ceph.osd.2.vm06.stdout:Mar 08 23:11:44 vm06 bash[64343]: ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30-osd-2 2026-03-08T23:11:44.693 DEBUG:teuthology.orchestra.run.vm06:> sudo pkill -f 'journalctl -f -n 0 -u ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@osd.2.service' 2026-03-08T23:11:44.703 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-08T23:11:44.703 INFO:tasks.cephadm.osd.2:Stopped osd.2 2026-03-08T23:11:44.703 INFO:tasks.cephadm.osd.3:Stopping osd.3... 2026-03-08T23:11:44.703 DEBUG:teuthology.orchestra.run.vm06:> sudo systemctl stop ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@osd.3 2026-03-08T23:11:45.028 INFO:journalctl@ceph.osd.3.vm06.stdout:Mar 08 23:11:44 vm06 systemd[1]: Stopping Ceph osd.3 for e2eb96e6-1b41-11f1-83e5-75f1b5373d30... 
2026-03-08T23:11:45.029 INFO:journalctl@ceph.osd.3.vm06.stdout:Mar 08 23:11:44 vm06 bash[48703]: debug 2026-03-08T23:11:44.787+0000 7fb874cb1640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.3 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0 2026-03-08T23:11:45.029 INFO:journalctl@ceph.osd.3.vm06.stdout:Mar 08 23:11:44 vm06 bash[48703]: debug 2026-03-08T23:11:44.787+0000 7fb874cb1640 -1 osd.3 71 *** Got signal Terminated *** 2026-03-08T23:11:45.029 INFO:journalctl@ceph.osd.3.vm06.stdout:Mar 08 23:11:44 vm06 bash[48703]: debug 2026-03-08T23:11:44.787+0000 7fb874cb1640 -1 osd.3 71 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-08T23:11:50.130 INFO:journalctl@ceph.osd.3.vm06.stdout:Mar 08 23:11:49 vm06 bash[64536]: ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30-osd-3 2026-03-08T23:11:50.170 DEBUG:teuthology.orchestra.run.vm06:> sudo pkill -f 'journalctl -f -n 0 -u ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@osd.3.service' 2026-03-08T23:11:50.180 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-08T23:11:50.180 INFO:tasks.cephadm.osd.3:Stopped osd.3 2026-03-08T23:11:50.180 INFO:tasks.cephadm.osd.4:Stopping osd.4... 2026-03-08T23:11:50.180 DEBUG:teuthology.orchestra.run.vm11:> sudo systemctl stop ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@osd.4 2026-03-08T23:11:50.558 INFO:journalctl@ceph.osd.4.vm11.stdout:Mar 08 23:11:50 vm11 systemd[1]: Stopping Ceph osd.4 for e2eb96e6-1b41-11f1-83e5-75f1b5373d30... 
2026-03-08T23:11:50.558 INFO:journalctl@ceph.osd.4.vm11.stdout:Mar 08 23:11:50 vm11 bash[26565]: debug 2026-03-08T23:11:50.222+0000 7ffb6b022640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.4 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0 2026-03-08T23:11:50.558 INFO:journalctl@ceph.osd.4.vm11.stdout:Mar 08 23:11:50 vm11 bash[26565]: debug 2026-03-08T23:11:50.222+0000 7ffb6b022640 -1 osd.4 71 *** Got signal Terminated *** 2026-03-08T23:11:50.558 INFO:journalctl@ceph.osd.4.vm11.stdout:Mar 08 23:11:50 vm11 bash[26565]: debug 2026-03-08T23:11:50.222+0000 7ffb6b022640 -1 osd.4 71 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-08T23:11:54.058 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 08 23:11:53 vm11 bash[38325]: debug 2026-03-08T23:11:53.738+0000 7f4553219640 -1 osd.6 71 heartbeat_check: no reply from 192.168.123.106:6803 osd.0 since back 2026-03-08T23:11:28.311417+0000 front 2026-03-08T23:11:28.311427+0000 (oldest deadline 2026-03-08T23:11:53.610988+0000) 2026-03-08T23:11:55.058 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 08 23:11:54 vm11 bash[38325]: debug 2026-03-08T23:11:54.730+0000 7f4553219640 -1 osd.6 71 heartbeat_check: no reply from 192.168.123.106:6803 osd.0 since back 2026-03-08T23:11:28.311417+0000 front 2026-03-08T23:11:28.311427+0000 (oldest deadline 2026-03-08T23:11:53.610988+0000) 2026-03-08T23:11:55.558 INFO:journalctl@ceph.osd.4.vm11.stdout:Mar 08 23:11:55 vm11 bash[54280]: ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30-osd-4 2026-03-08T23:11:55.577 INFO:journalctl@ceph.osd.5.vm11.stdout:Mar 08 23:11:55 vm11 bash[32309]: debug 2026-03-08T23:11:55.182+0000 7f36b9502640 -1 osd.5 71 heartbeat_check: no reply from 192.168.123.106:6803 osd.0 since back 2026-03-08T23:11:29.193632+0000 front 2026-03-08T23:11:29.193857+0000 (oldest deadline 2026-03-08T23:11:55.093168+0000) 2026-03-08T23:11:56.058 
INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 08 23:11:55 vm11 bash[38325]: debug 2026-03-08T23:11:55.750+0000 7f4553219640 -1 osd.6 71 heartbeat_check: no reply from 192.168.123.106:6803 osd.0 since back 2026-03-08T23:11:28.311417+0000 front 2026-03-08T23:11:28.311427+0000 (oldest deadline 2026-03-08T23:11:53.610988+0000) 2026-03-08T23:11:56.210 DEBUG:teuthology.orchestra.run.vm11:> sudo pkill -f 'journalctl -f -n 0 -u ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@osd.4.service' 2026-03-08T23:11:56.221 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-08T23:11:56.221 INFO:tasks.cephadm.osd.4:Stopped osd.4 2026-03-08T23:11:56.221 INFO:tasks.cephadm.osd.5:Stopping osd.5... 2026-03-08T23:11:56.222 DEBUG:teuthology.orchestra.run.vm11:> sudo systemctl stop ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@osd.5 2026-03-08T23:11:56.558 INFO:journalctl@ceph.osd.5.vm11.stdout:Mar 08 23:11:56 vm11 bash[32309]: debug 2026-03-08T23:11:56.218+0000 7f36b9502640 -1 osd.5 71 heartbeat_check: no reply from 192.168.123.106:6803 osd.0 since back 2026-03-08T23:11:29.193632+0000 front 2026-03-08T23:11:29.193857+0000 (oldest deadline 2026-03-08T23:11:55.093168+0000) 2026-03-08T23:11:56.558 INFO:journalctl@ceph.osd.5.vm11.stdout:Mar 08 23:11:56 vm11 systemd[1]: Stopping Ceph osd.5 for e2eb96e6-1b41-11f1-83e5-75f1b5373d30... 
2026-03-08T23:11:56.558 INFO:journalctl@ceph.osd.5.vm11.stdout:Mar 08 23:11:56 vm11 bash[32309]: debug 2026-03-08T23:11:56.310+0000 7f36bcee9640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.5 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0 2026-03-08T23:11:56.558 INFO:journalctl@ceph.osd.5.vm11.stdout:Mar 08 23:11:56 vm11 bash[32309]: debug 2026-03-08T23:11:56.310+0000 7f36bcee9640 -1 osd.5 71 *** Got signal Terminated *** 2026-03-08T23:11:56.558 INFO:journalctl@ceph.osd.5.vm11.stdout:Mar 08 23:11:56 vm11 bash[32309]: debug 2026-03-08T23:11:56.310+0000 7f36bcee9640 -1 osd.5 71 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-08T23:11:57.055 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 08 23:11:56 vm11 bash[38325]: debug 2026-03-08T23:11:56.778+0000 7f4553219640 -1 osd.6 71 heartbeat_check: no reply from 192.168.123.106:6803 osd.0 since back 2026-03-08T23:11:28.311417+0000 front 2026-03-08T23:11:28.311427+0000 (oldest deadline 2026-03-08T23:11:53.610988+0000) 2026-03-08T23:11:57.308 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 08 23:11:57 vm11 bash[44367]: debug 2026-03-08T23:11:57.050+0000 7fe42cb4d640 -1 osd.7 71 heartbeat_check: no reply from 192.168.123.106:6803 osd.0 since back 2026-03-08T23:11:31.567306+0000 front 2026-03-08T23:11:31.567295+0000 (oldest deadline 2026-03-08T23:11:56.266948+0000) 2026-03-08T23:11:57.308 INFO:journalctl@ceph.osd.5.vm11.stdout:Mar 08 23:11:57 vm11 bash[32309]: debug 2026-03-08T23:11:57.202+0000 7f36b9502640 -1 osd.5 71 heartbeat_check: no reply from 192.168.123.106:6803 osd.0 since back 2026-03-08T23:11:29.193632+0000 front 2026-03-08T23:11:29.193857+0000 (oldest deadline 2026-03-08T23:11:55.093168+0000) 2026-03-08T23:11:58.100 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 08 23:11:57 vm11 bash[38325]: debug 2026-03-08T23:11:57.810+0000 7f4553219640 -1 osd.6 71 heartbeat_check: no reply from 
192.168.123.106:6803 osd.0 since back 2026-03-08T23:11:28.311417+0000 front 2026-03-08T23:11:28.311427+0000 (oldest deadline 2026-03-08T23:11:53.610988+0000) 2026-03-08T23:11:58.558 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 08 23:11:58 vm11 bash[44367]: debug 2026-03-08T23:11:58.098+0000 7fe42cb4d640 -1 osd.7 71 heartbeat_check: no reply from 192.168.123.106:6803 osd.0 since back 2026-03-08T23:11:31.567306+0000 front 2026-03-08T23:11:31.567295+0000 (oldest deadline 2026-03-08T23:11:56.266948+0000) 2026-03-08T23:11:58.558 INFO:journalctl@ceph.osd.5.vm11.stdout:Mar 08 23:11:58 vm11 bash[32309]: debug 2026-03-08T23:11:58.222+0000 7f36b9502640 -1 osd.5 71 heartbeat_check: no reply from 192.168.123.106:6803 osd.0 since back 2026-03-08T23:11:29.193632+0000 front 2026-03-08T23:11:29.193857+0000 (oldest deadline 2026-03-08T23:11:55.093168+0000) 2026-03-08T23:11:59.243 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 08 23:11:58 vm11 bash[38325]: debug 2026-03-08T23:11:58.826+0000 7f4553219640 -1 osd.6 71 heartbeat_check: no reply from 192.168.123.106:6803 osd.0 since back 2026-03-08T23:11:28.311417+0000 front 2026-03-08T23:11:28.311427+0000 (oldest deadline 2026-03-08T23:11:53.610988+0000) 2026-03-08T23:11:59.243 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 08 23:11:59 vm11 bash[44367]: debug 2026-03-08T23:11:59.070+0000 7fe42cb4d640 -1 osd.7 71 heartbeat_check: no reply from 192.168.123.106:6803 osd.0 since back 2026-03-08T23:11:31.567306+0000 front 2026-03-08T23:11:31.567295+0000 (oldest deadline 2026-03-08T23:11:56.266948+0000) 2026-03-08T23:11:59.558 INFO:journalctl@ceph.osd.5.vm11.stdout:Mar 08 23:11:59 vm11 bash[32309]: debug 2026-03-08T23:11:59.238+0000 7f36b9502640 -1 osd.5 71 heartbeat_check: no reply from 192.168.123.106:6803 osd.0 since back 2026-03-08T23:11:29.193632+0000 front 2026-03-08T23:11:29.193857+0000 (oldest deadline 2026-03-08T23:11:55.093168+0000) 2026-03-08T23:11:59.558 INFO:journalctl@ceph.osd.5.vm11.stdout:Mar 08 23:11:59 vm11 bash[32309]: debug 
2026-03-08T23:11:59.238+0000 7f36b9502640 -1 osd.5 71 heartbeat_check: no reply from 192.168.123.106:6807 osd.1 since back 2026-03-08T23:11:37.993958+0000 front 2026-03-08T23:11:37.994002+0000 (oldest deadline 2026-03-08T23:11:59.093647+0000) 2026-03-08T23:12:00.058 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 08 23:11:59 vm11 bash[38325]: debug 2026-03-08T23:11:59.782+0000 7f4553219640 -1 osd.6 71 heartbeat_check: no reply from 192.168.123.106:6803 osd.0 since back 2026-03-08T23:11:28.311417+0000 front 2026-03-08T23:11:28.311427+0000 (oldest deadline 2026-03-08T23:11:53.610988+0000) 2026-03-08T23:12:00.558 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 08 23:12:00 vm11 bash[44367]: debug 2026-03-08T23:12:00.082+0000 7fe42cb4d640 -1 osd.7 71 heartbeat_check: no reply from 192.168.123.106:6803 osd.0 since back 2026-03-08T23:11:31.567306+0000 front 2026-03-08T23:11:31.567295+0000 (oldest deadline 2026-03-08T23:11:56.266948+0000) 2026-03-08T23:12:00.558 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 08 23:12:00 vm11 bash[44367]: debug 2026-03-08T23:12:00.082+0000 7fe42cb4d640 -1 osd.7 71 heartbeat_check: no reply from 192.168.123.106:6807 osd.1 since back 2026-03-08T23:11:36.267291+0000 front 2026-03-08T23:11:36.267345+0000 (oldest deadline 2026-03-08T23:11:59.167133+0000) 2026-03-08T23:12:00.558 INFO:journalctl@ceph.osd.5.vm11.stdout:Mar 08 23:12:00 vm11 bash[32309]: debug 2026-03-08T23:12:00.286+0000 7f36b9502640 -1 osd.5 71 heartbeat_check: no reply from 192.168.123.106:6803 osd.0 since back 2026-03-08T23:11:29.193632+0000 front 2026-03-08T23:11:29.193857+0000 (oldest deadline 2026-03-08T23:11:55.093168+0000) 2026-03-08T23:12:00.558 INFO:journalctl@ceph.osd.5.vm11.stdout:Mar 08 23:12:00 vm11 bash[32309]: debug 2026-03-08T23:12:00.286+0000 7f36b9502640 -1 osd.5 71 heartbeat_check: no reply from 192.168.123.106:6807 osd.1 since back 2026-03-08T23:11:37.993958+0000 front 2026-03-08T23:11:37.994002+0000 (oldest deadline 2026-03-08T23:11:59.093647+0000) 2026-03-08T23:12:01.075 
INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 08 23:12:00 vm11 bash[38325]: debug 2026-03-08T23:12:00.810+0000 7f4553219640 -1 osd.6 71 heartbeat_check: no reply from 192.168.123.106:6803 osd.0 since back 2026-03-08T23:11:28.311417+0000 front 2026-03-08T23:11:28.311427+0000 (oldest deadline 2026-03-08T23:11:53.610988+0000) 2026-03-08T23:12:01.075 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 08 23:12:00 vm11 bash[38325]: debug 2026-03-08T23:12:00.810+0000 7f4553219640 -1 osd.6 71 heartbeat_check: no reply from 192.168.123.106:6807 osd.1 since back 2026-03-08T23:11:34.711620+0000 front 2026-03-08T23:11:34.711682+0000 (oldest deadline 2026-03-08T23:12:00.611375+0000) 2026-03-08T23:12:01.352 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 08 23:12:01 vm11 bash[44367]: debug 2026-03-08T23:12:01.070+0000 7fe42cb4d640 -1 osd.7 71 heartbeat_check: no reply from 192.168.123.106:6803 osd.0 since back 2026-03-08T23:11:31.567306+0000 front 2026-03-08T23:11:31.567295+0000 (oldest deadline 2026-03-08T23:11:56.266948+0000) 2026-03-08T23:12:01.352 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 08 23:12:01 vm11 bash[44367]: debug 2026-03-08T23:12:01.070+0000 7fe42cb4d640 -1 osd.7 71 heartbeat_check: no reply from 192.168.123.106:6807 osd.1 since back 2026-03-08T23:11:36.267291+0000 front 2026-03-08T23:11:36.267345+0000 (oldest deadline 2026-03-08T23:11:59.167133+0000) 2026-03-08T23:12:01.352 INFO:journalctl@ceph.osd.5.vm11.stdout:Mar 08 23:12:01 vm11 bash[32309]: debug 2026-03-08T23:12:01.302+0000 7f36b9502640 -1 osd.5 71 heartbeat_check: no reply from 192.168.123.106:6803 osd.0 since back 2026-03-08T23:11:29.193632+0000 front 2026-03-08T23:11:29.193857+0000 (oldest deadline 2026-03-08T23:11:55.093168+0000) 2026-03-08T23:12:01.352 INFO:journalctl@ceph.osd.5.vm11.stdout:Mar 08 23:12:01 vm11 bash[32309]: debug 2026-03-08T23:12:01.302+0000 7f36b9502640 -1 osd.5 71 heartbeat_check: no reply from 192.168.123.106:6807 osd.1 since back 2026-03-08T23:11:37.993958+0000 front 
2026-03-08T23:11:37.994002+0000 (oldest deadline 2026-03-08T23:11:59.093647+0000) 2026-03-08T23:12:01.689 INFO:journalctl@ceph.osd.5.vm11.stdout:Mar 08 23:12:01 vm11 bash[54465]: ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30-osd-5 2026-03-08T23:12:01.729 DEBUG:teuthology.orchestra.run.vm11:> sudo pkill -f 'journalctl -f -n 0 -u ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@osd.5.service' 2026-03-08T23:12:01.739 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-08T23:12:01.739 INFO:tasks.cephadm.osd.5:Stopped osd.5 2026-03-08T23:12:01.739 INFO:tasks.cephadm.osd.6:Stopping osd.6... 2026-03-08T23:12:01.739 DEBUG:teuthology.orchestra.run.vm11:> sudo systemctl stop ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@osd.6 2026-03-08T23:12:02.038 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 08 23:12:01 vm11 bash[38325]: debug 2026-03-08T23:12:01.778+0000 7f4553219640 -1 osd.6 71 heartbeat_check: no reply from 192.168.123.106:6803 osd.0 since back 2026-03-08T23:11:28.311417+0000 front 2026-03-08T23:11:28.311427+0000 (oldest deadline 2026-03-08T23:11:53.610988+0000) 2026-03-08T23:12:02.038 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 08 23:12:01 vm11 bash[38325]: debug 2026-03-08T23:12:01.778+0000 7f4553219640 -1 osd.6 71 heartbeat_check: no reply from 192.168.123.106:6807 osd.1 since back 2026-03-08T23:11:34.711620+0000 front 2026-03-08T23:11:34.711682+0000 (oldest deadline 2026-03-08T23:12:00.611375+0000) 2026-03-08T23:12:02.038 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 08 23:12:01 vm11 systemd[1]: Stopping Ceph osd.6 for e2eb96e6-1b41-11f1-83e5-75f1b5373d30... 
2026-03-08T23:12:02.038 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 08 23:12:01 vm11 bash[38325]: debug 2026-03-08T23:12:01.890+0000 7f4556c00640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.6 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0 2026-03-08T23:12:02.038 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 08 23:12:01 vm11 bash[38325]: debug 2026-03-08T23:12:01.890+0000 7f4556c00640 -1 osd.6 71 *** Got signal Terminated *** 2026-03-08T23:12:02.038 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 08 23:12:01 vm11 bash[38325]: debug 2026-03-08T23:12:01.890+0000 7f4556c00640 -1 osd.6 71 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-08T23:12:02.308 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 08 23:12:02 vm11 bash[44367]: debug 2026-03-08T23:12:02.034+0000 7fe42cb4d640 -1 osd.7 71 heartbeat_check: no reply from 192.168.123.106:6803 osd.0 since back 2026-03-08T23:11:31.567306+0000 front 2026-03-08T23:11:31.567295+0000 (oldest deadline 2026-03-08T23:11:56.266948+0000) 2026-03-08T23:12:02.308 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 08 23:12:02 vm11 bash[44367]: debug 2026-03-08T23:12:02.034+0000 7fe42cb4d640 -1 osd.7 71 heartbeat_check: no reply from 192.168.123.106:6807 osd.1 since back 2026-03-08T23:11:36.267291+0000 front 2026-03-08T23:11:36.267345+0000 (oldest deadline 2026-03-08T23:11:59.167133+0000) 2026-03-08T23:12:03.308 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 08 23:12:03 vm11 bash[44367]: debug 2026-03-08T23:12:03.006+0000 7fe42cb4d640 -1 osd.7 71 heartbeat_check: no reply from 192.168.123.106:6803 osd.0 since back 2026-03-08T23:11:31.567306+0000 front 2026-03-08T23:11:31.567295+0000 (oldest deadline 2026-03-08T23:11:56.266948+0000) 2026-03-08T23:12:03.308 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 08 23:12:03 vm11 bash[44367]: debug 2026-03-08T23:12:03.006+0000 7fe42cb4d640 -1 osd.7 71 heartbeat_check: no reply from 
192.168.123.106:6807 osd.1 since back 2026-03-08T23:11:36.267291+0000 front 2026-03-08T23:11:36.267345+0000 (oldest deadline 2026-03-08T23:11:59.167133+0000) 2026-03-08T23:12:03.308 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 08 23:12:02 vm11 bash[38325]: debug 2026-03-08T23:12:02.818+0000 7f4553219640 -1 osd.6 71 heartbeat_check: no reply from 192.168.123.106:6803 osd.0 since back 2026-03-08T23:11:28.311417+0000 front 2026-03-08T23:11:28.311427+0000 (oldest deadline 2026-03-08T23:11:53.610988+0000) 2026-03-08T23:12:03.308 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 08 23:12:02 vm11 bash[38325]: debug 2026-03-08T23:12:02.818+0000 7f4553219640 -1 osd.6 71 heartbeat_check: no reply from 192.168.123.106:6807 osd.1 since back 2026-03-08T23:11:34.711620+0000 front 2026-03-08T23:11:34.711682+0000 (oldest deadline 2026-03-08T23:12:00.611375+0000) 2026-03-08T23:12:04.308 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 08 23:12:04 vm11 bash[44367]: debug 2026-03-08T23:12:04.018+0000 7fe42cb4d640 -1 osd.7 71 heartbeat_check: no reply from 192.168.123.106:6803 osd.0 since back 2026-03-08T23:11:31.567306+0000 front 2026-03-08T23:11:31.567295+0000 (oldest deadline 2026-03-08T23:11:56.266948+0000) 2026-03-08T23:12:04.308 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 08 23:12:04 vm11 bash[44367]: debug 2026-03-08T23:12:04.018+0000 7fe42cb4d640 -1 osd.7 71 heartbeat_check: no reply from 192.168.123.106:6807 osd.1 since back 2026-03-08T23:11:36.267291+0000 front 2026-03-08T23:11:36.267345+0000 (oldest deadline 2026-03-08T23:11:59.167133+0000) 2026-03-08T23:12:04.308 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 08 23:12:03 vm11 bash[38325]: debug 2026-03-08T23:12:03.806+0000 7f4553219640 -1 osd.6 71 heartbeat_check: no reply from 192.168.123.106:6803 osd.0 since back 2026-03-08T23:11:28.311417+0000 front 2026-03-08T23:11:28.311427+0000 (oldest deadline 2026-03-08T23:11:53.610988+0000) 2026-03-08T23:12:04.308 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 08 23:12:03 vm11 bash[38325]: debug 
2026-03-08T23:12:03.806+0000 7f4553219640 -1 osd.6 71 heartbeat_check: no reply from 192.168.123.106:6807 osd.1 since back 2026-03-08T23:11:34.711620+0000 front 2026-03-08T23:11:34.711682+0000 (oldest deadline 2026-03-08T23:12:00.611375+0000) 2026-03-08T23:12:05.309 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 08 23:12:04 vm11 bash[38325]: debug 2026-03-08T23:12:04.818+0000 7f4553219640 -1 osd.6 71 heartbeat_check: no reply from 192.168.123.106:6803 osd.0 since back 2026-03-08T23:11:28.311417+0000 front 2026-03-08T23:11:28.311427+0000 (oldest deadline 2026-03-08T23:11:53.610988+0000) 2026-03-08T23:12:05.309 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 08 23:12:04 vm11 bash[38325]: debug 2026-03-08T23:12:04.818+0000 7f4553219640 -1 osd.6 71 heartbeat_check: no reply from 192.168.123.106:6807 osd.1 since back 2026-03-08T23:11:34.711620+0000 front 2026-03-08T23:11:34.711682+0000 (oldest deadline 2026-03-08T23:12:00.611375+0000) 2026-03-08T23:12:05.309 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 08 23:12:04 vm11 bash[44367]: debug 2026-03-08T23:12:04.986+0000 7fe42cb4d640 -1 osd.7 71 heartbeat_check: no reply from 192.168.123.106:6803 osd.0 since back 2026-03-08T23:11:31.567306+0000 front 2026-03-08T23:11:31.567295+0000 (oldest deadline 2026-03-08T23:11:56.266948+0000) 2026-03-08T23:12:05.309 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 08 23:12:04 vm11 bash[44367]: debug 2026-03-08T23:12:04.986+0000 7fe42cb4d640 -1 osd.7 71 heartbeat_check: no reply from 192.168.123.106:6807 osd.1 since back 2026-03-08T23:11:36.267291+0000 front 2026-03-08T23:11:36.267345+0000 (oldest deadline 2026-03-08T23:11:59.167133+0000) 2026-03-08T23:12:06.308 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 08 23:12:05 vm11 bash[44367]: debug 2026-03-08T23:12:05.974+0000 7fe42cb4d640 -1 osd.7 71 heartbeat_check: no reply from 192.168.123.106:6803 osd.0 since back 2026-03-08T23:11:31.567306+0000 front 2026-03-08T23:11:31.567295+0000 (oldest deadline 2026-03-08T23:11:56.266948+0000) 2026-03-08T23:12:06.308 
INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 08 23:12:05 vm11 bash[44367]: debug 2026-03-08T23:12:05.974+0000 7fe42cb4d640 -1 osd.7 71 heartbeat_check: no reply from 192.168.123.106:6807 osd.1 since back 2026-03-08T23:11:36.267291+0000 front 2026-03-08T23:11:36.267345+0000 (oldest deadline 2026-03-08T23:11:59.167133+0000) 2026-03-08T23:12:06.308 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 08 23:12:05 vm11 bash[44367]: debug 2026-03-08T23:12:05.974+0000 7fe42cb4d640 -1 osd.7 71 heartbeat_check: no reply from 192.168.123.106:6811 osd.2 since back 2026-03-08T23:11:42.568671+0000 front 2026-03-08T23:11:42.568749+0000 (oldest deadline 2026-03-08T23:12:05.468168+0000) 2026-03-08T23:12:06.308 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 08 23:12:05 vm11 bash[38325]: debug 2026-03-08T23:12:05.818+0000 7f4553219640 -1 osd.6 71 heartbeat_check: no reply from 192.168.123.106:6803 osd.0 since back 2026-03-08T23:11:28.311417+0000 front 2026-03-08T23:11:28.311427+0000 (oldest deadline 2026-03-08T23:11:53.610988+0000) 2026-03-08T23:12:06.308 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 08 23:12:05 vm11 bash[38325]: debug 2026-03-08T23:12:05.818+0000 7f4553219640 -1 osd.6 71 heartbeat_check: no reply from 192.168.123.106:6807 osd.1 since back 2026-03-08T23:11:34.711620+0000 front 2026-03-08T23:11:34.711682+0000 (oldest deadline 2026-03-08T23:12:00.611375+0000) 2026-03-08T23:12:06.308 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 08 23:12:05 vm11 bash[38325]: debug 2026-03-08T23:12:05.818+0000 7f4553219640 -1 osd.6 71 heartbeat_check: no reply from 192.168.123.106:6811 osd.2 since back 2026-03-08T23:11:44.112088+0000 front 2026-03-08T23:11:44.112133+0000 (oldest deadline 2026-03-08T23:12:05.811810+0000) 2026-03-08T23:12:07.197 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 08 23:12:06 vm11 bash[38325]: debug 2026-03-08T23:12:06.830+0000 7f4553219640 -1 osd.6 71 heartbeat_check: no reply from 192.168.123.106:6803 osd.0 since back 2026-03-08T23:11:28.311417+0000 front 
2026-03-08T23:11:28.311427+0000 (oldest deadline 2026-03-08T23:11:53.610988+0000) 2026-03-08T23:12:07.197 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 08 23:12:06 vm11 bash[38325]: debug 2026-03-08T23:12:06.830+0000 7f4553219640 -1 osd.6 71 heartbeat_check: no reply from 192.168.123.106:6807 osd.1 since back 2026-03-08T23:11:34.711620+0000 front 2026-03-08T23:11:34.711682+0000 (oldest deadline 2026-03-08T23:12:00.611375+0000) 2026-03-08T23:12:07.197 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 08 23:12:06 vm11 bash[38325]: debug 2026-03-08T23:12:06.830+0000 7f4553219640 -1 osd.6 71 heartbeat_check: no reply from 192.168.123.106:6811 osd.2 since back 2026-03-08T23:11:44.112088+0000 front 2026-03-08T23:11:44.112133+0000 (oldest deadline 2026-03-08T23:12:05.811810+0000) 2026-03-08T23:12:07.197 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 08 23:12:07 vm11 bash[44367]: debug 2026-03-08T23:12:07.014+0000 7fe42cb4d640 -1 osd.7 71 heartbeat_check: no reply from 192.168.123.106:6803 osd.0 since back 2026-03-08T23:11:31.567306+0000 front 2026-03-08T23:11:31.567295+0000 (oldest deadline 2026-03-08T23:11:56.266948+0000) 2026-03-08T23:12:07.197 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 08 23:12:07 vm11 bash[44367]: debug 2026-03-08T23:12:07.014+0000 7fe42cb4d640 -1 osd.7 71 heartbeat_check: no reply from 192.168.123.106:6807 osd.1 since back 2026-03-08T23:11:36.267291+0000 front 2026-03-08T23:11:36.267345+0000 (oldest deadline 2026-03-08T23:11:59.167133+0000) 2026-03-08T23:12:07.197 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 08 23:12:07 vm11 bash[44367]: debug 2026-03-08T23:12:07.014+0000 7fe42cb4d640 -1 osd.7 71 heartbeat_check: no reply from 192.168.123.106:6811 osd.2 since back 2026-03-08T23:11:42.568671+0000 front 2026-03-08T23:11:42.568749+0000 (oldest deadline 2026-03-08T23:12:05.468168+0000) 2026-03-08T23:12:07.523 INFO:journalctl@ceph.osd.6.vm11.stdout:Mar 08 23:12:07 vm11 bash[54642]: ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30-osd-6 2026-03-08T23:12:07.654 
DEBUG:teuthology.orchestra.run.vm11:> sudo pkill -f 'journalctl -f -n 0 -u ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@osd.6.service' 2026-03-08T23:12:07.664 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-08T23:12:07.664 INFO:tasks.cephadm.osd.6:Stopped osd.6 2026-03-08T23:12:07.664 INFO:tasks.cephadm.osd.7:Stopping osd.7... 2026-03-08T23:12:07.664 DEBUG:teuthology.orchestra.run.vm11:> sudo systemctl stop ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@osd.7 2026-03-08T23:12:07.808 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 08 23:12:07 vm11 systemd[1]: Stopping Ceph osd.7 for e2eb96e6-1b41-11f1-83e5-75f1b5373d30... 2026-03-08T23:12:07.808 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 08 23:12:07 vm11 bash[44367]: debug 2026-03-08T23:12:07.754+0000 7fe430d35640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.7 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0 2026-03-08T23:12:07.808 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 08 23:12:07 vm11 bash[44367]: debug 2026-03-08T23:12:07.754+0000 7fe430d35640 -1 osd.7 71 *** Got signal Terminated *** 2026-03-08T23:12:07.808 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 08 23:12:07 vm11 bash[44367]: debug 2026-03-08T23:12:07.754+0000 7fe430d35640 -1 osd.7 71 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-08T23:12:07.808 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 08 23:12:07 vm11 bash[51823]: ts=2026-03-08T23:12:07.521Z caller=refresh.go:90 level=error component="discovery manager scrape" discovery=http config=ceph msg="Unable to refresh target groups" err="Get \"http://192.168.123.106:8765/sd/prometheus/sd-config?service=mgr-prometheus\": dial tcp 192.168.123.106:8765: connect: connection refused" 2026-03-08T23:12:07.808 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 08 23:12:07 vm11 bash[51823]: ts=2026-03-08T23:12:07.521Z caller=refresh.go:90 level=error 
component="discovery manager scrape" discovery=http config=node msg="Unable to refresh target groups" err="Get \"http://192.168.123.106:8765/sd/prometheus/sd-config?service=node-exporter\": dial tcp 192.168.123.106:8765: connect: connection refused" 2026-03-08T23:12:07.808 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 08 23:12:07 vm11 bash[51823]: ts=2026-03-08T23:12:07.528Z caller=refresh.go:90 level=error component="discovery manager scrape" discovery=http config=nvmeof msg="Unable to refresh target groups" err="Get \"http://192.168.123.106:8765/sd/prometheus/sd-config?service=nvmeof\": dial tcp 192.168.123.106:8765: connect: connection refused" 2026-03-08T23:12:07.808 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 08 23:12:07 vm11 bash[51823]: ts=2026-03-08T23:12:07.528Z caller=refresh.go:90 level=error component="discovery manager scrape" discovery=http config=ceph-exporter msg="Unable to refresh target groups" err="Get \"http://192.168.123.106:8765/sd/prometheus/sd-config?service=ceph-exporter\": dial tcp 192.168.123.106:8765: connect: connection refused" 2026-03-08T23:12:07.808 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 08 23:12:07 vm11 bash[51823]: ts=2026-03-08T23:12:07.529Z caller=refresh.go:90 level=error component="discovery manager scrape" discovery=http config=nfs msg="Unable to refresh target groups" err="Get \"http://192.168.123.106:8765/sd/prometheus/sd-config?service=nfs\": dial tcp 192.168.123.106:8765: connect: connection refused" 2026-03-08T23:12:07.808 INFO:journalctl@ceph.prometheus.a.vm11.stdout:Mar 08 23:12:07 vm11 bash[51823]: ts=2026-03-08T23:12:07.529Z caller=refresh.go:90 level=error component="discovery manager notify" discovery=http config=config-0 msg="Unable to refresh target groups" err="Get \"http://192.168.123.106:8765/sd/prometheus/sd-config?service=alertmanager\": dial tcp 192.168.123.106:8765: connect: connection refused" 2026-03-08T23:12:08.308 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 08 23:12:08 vm11 
bash[44367]: debug 2026-03-08T23:12:08.022+0000 7fe42cb4d640 -1 osd.7 71 heartbeat_check: no reply from 192.168.123.106:6803 osd.0 since back 2026-03-08T23:11:31.567306+0000 front 2026-03-08T23:11:31.567295+0000 (oldest deadline 2026-03-08T23:11:56.266948+0000) 2026-03-08T23:12:08.308 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 08 23:12:08 vm11 bash[44367]: debug 2026-03-08T23:12:08.022+0000 7fe42cb4d640 -1 osd.7 71 heartbeat_check: no reply from 192.168.123.106:6807 osd.1 since back 2026-03-08T23:11:36.267291+0000 front 2026-03-08T23:11:36.267345+0000 (oldest deadline 2026-03-08T23:11:59.167133+0000) 2026-03-08T23:12:08.308 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 08 23:12:08 vm11 bash[44367]: debug 2026-03-08T23:12:08.022+0000 7fe42cb4d640 -1 osd.7 71 heartbeat_check: no reply from 192.168.123.106:6811 osd.2 since back 2026-03-08T23:11:42.568671+0000 front 2026-03-08T23:11:42.568749+0000 (oldest deadline 2026-03-08T23:12:05.468168+0000) 2026-03-08T23:12:09.308 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 08 23:12:08 vm11 bash[44367]: debug 2026-03-08T23:12:08.982+0000 7fe42cb4d640 -1 osd.7 71 heartbeat_check: no reply from 192.168.123.106:6803 osd.0 since back 2026-03-08T23:11:31.567306+0000 front 2026-03-08T23:11:31.567295+0000 (oldest deadline 2026-03-08T23:11:56.266948+0000) 2026-03-08T23:12:09.308 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 08 23:12:08 vm11 bash[44367]: debug 2026-03-08T23:12:08.982+0000 7fe42cb4d640 -1 osd.7 71 heartbeat_check: no reply from 192.168.123.106:6807 osd.1 since back 2026-03-08T23:11:36.267291+0000 front 2026-03-08T23:11:36.267345+0000 (oldest deadline 2026-03-08T23:11:59.167133+0000) 2026-03-08T23:12:09.308 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 08 23:12:08 vm11 bash[44367]: debug 2026-03-08T23:12:08.982+0000 7fe42cb4d640 -1 osd.7 71 heartbeat_check: no reply from 192.168.123.106:6811 osd.2 since back 2026-03-08T23:11:42.568671+0000 front 2026-03-08T23:11:42.568749+0000 (oldest deadline 2026-03-08T23:12:05.468168+0000) 
2026-03-08T23:12:10.308 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 08 23:12:10 vm11 bash[44367]: debug 2026-03-08T23:12:10.022+0000 7fe42cb4d640 -1 osd.7 71 heartbeat_check: no reply from 192.168.123.106:6803 osd.0 since back 2026-03-08T23:11:31.567306+0000 front 2026-03-08T23:11:31.567295+0000 (oldest deadline 2026-03-08T23:11:56.266948+0000) 2026-03-08T23:12:10.308 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 08 23:12:10 vm11 bash[44367]: debug 2026-03-08T23:12:10.022+0000 7fe42cb4d640 -1 osd.7 71 heartbeat_check: no reply from 192.168.123.106:6807 osd.1 since back 2026-03-08T23:11:36.267291+0000 front 2026-03-08T23:11:36.267345+0000 (oldest deadline 2026-03-08T23:11:59.167133+0000) 2026-03-08T23:12:10.308 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 08 23:12:10 vm11 bash[44367]: debug 2026-03-08T23:12:10.022+0000 7fe42cb4d640 -1 osd.7 71 heartbeat_check: no reply from 192.168.123.106:6811 osd.2 since back 2026-03-08T23:11:42.568671+0000 front 2026-03-08T23:11:42.568749+0000 (oldest deadline 2026-03-08T23:12:05.468168+0000) 2026-03-08T23:12:11.558 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 08 23:12:11 vm11 bash[44367]: debug 2026-03-08T23:12:11.058+0000 7fe42cb4d640 -1 osd.7 71 heartbeat_check: no reply from 192.168.123.106:6803 osd.0 since back 2026-03-08T23:11:31.567306+0000 front 2026-03-08T23:11:31.567295+0000 (oldest deadline 2026-03-08T23:11:56.266948+0000) 2026-03-08T23:12:11.558 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 08 23:12:11 vm11 bash[44367]: debug 2026-03-08T23:12:11.058+0000 7fe42cb4d640 -1 osd.7 71 heartbeat_check: no reply from 192.168.123.106:6807 osd.1 since back 2026-03-08T23:11:36.267291+0000 front 2026-03-08T23:11:36.267345+0000 (oldest deadline 2026-03-08T23:11:59.167133+0000) 2026-03-08T23:12:11.558 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 08 23:12:11 vm11 bash[44367]: debug 2026-03-08T23:12:11.058+0000 7fe42cb4d640 -1 osd.7 71 heartbeat_check: no reply from 192.168.123.106:6811 osd.2 since back 2026-03-08T23:11:42.568671+0000 front 
2026-03-08T23:11:42.568749+0000 (oldest deadline 2026-03-08T23:12:05.468168+0000) 2026-03-08T23:12:11.558 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 08 23:12:11 vm11 bash[44367]: debug 2026-03-08T23:12:11.058+0000 7fe42cb4d640 -1 osd.7 71 heartbeat_check: no reply from 192.168.123.106:6815 osd.3 since back 2026-03-08T23:11:47.768806+0000 front 2026-03-08T23:11:47.768833+0000 (oldest deadline 2026-03-08T23:12:10.668575+0000) 2026-03-08T23:12:12.558 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 08 23:12:12 vm11 bash[44367]: debug 2026-03-08T23:12:12.106+0000 7fe42cb4d640 -1 osd.7 71 heartbeat_check: no reply from 192.168.123.106:6803 osd.0 since back 2026-03-08T23:11:31.567306+0000 front 2026-03-08T23:11:31.567295+0000 (oldest deadline 2026-03-08T23:11:56.266948+0000) 2026-03-08T23:12:12.558 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 08 23:12:12 vm11 bash[44367]: debug 2026-03-08T23:12:12.106+0000 7fe42cb4d640 -1 osd.7 71 heartbeat_check: no reply from 192.168.123.106:6807 osd.1 since back 2026-03-08T23:11:36.267291+0000 front 2026-03-08T23:11:36.267345+0000 (oldest deadline 2026-03-08T23:11:59.167133+0000) 2026-03-08T23:12:12.558 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 08 23:12:12 vm11 bash[44367]: debug 2026-03-08T23:12:12.106+0000 7fe42cb4d640 -1 osd.7 71 heartbeat_check: no reply from 192.168.123.106:6811 osd.2 since back 2026-03-08T23:11:42.568671+0000 front 2026-03-08T23:11:42.568749+0000 (oldest deadline 2026-03-08T23:12:05.468168+0000) 2026-03-08T23:12:12.558 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 08 23:12:12 vm11 bash[44367]: debug 2026-03-08T23:12:12.106+0000 7fe42cb4d640 -1 osd.7 71 heartbeat_check: no reply from 192.168.123.106:6815 osd.3 since back 2026-03-08T23:11:47.768806+0000 front 2026-03-08T23:11:47.768833+0000 (oldest deadline 2026-03-08T23:12:10.668575+0000) 2026-03-08T23:12:13.058 INFO:journalctl@ceph.osd.7.vm11.stdout:Mar 08 23:12:12 vm11 bash[54826]: ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30-osd-7 2026-03-08T23:12:13.138 
DEBUG:teuthology.orchestra.run.vm11:> sudo pkill -f 'journalctl -f -n 0 -u ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@osd.7.service' 2026-03-08T23:12:13.148 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-08T23:12:13.149 INFO:tasks.cephadm.osd.7:Stopped osd.7 2026-03-08T23:12:13.149 INFO:tasks.cephadm.ceph.rgw.foo.a:Stopping rgw.foo.a... 2026-03-08T23:12:13.149 DEBUG:teuthology.orchestra.run.vm06:> sudo systemctl stop ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@rgw.foo.a 2026-03-08T23:12:13.529 INFO:journalctl@ceph.rgw.foo.a.vm06.stdout:Mar 08 23:12:13 vm06 systemd[1]: Stopping Ceph rgw.foo.a for e2eb96e6-1b41-11f1-83e5-75f1b5373d30... 2026-03-08T23:12:13.529 INFO:journalctl@ceph.rgw.foo.a.vm06.stdout:Mar 08 23:12:13 vm06 bash[53236]: debug 2026-03-08T23:12:13.187+0000 7fb8b9c7c640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/radosgw -n client.rgw.foo.a -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0 2026-03-08T23:12:13.529 INFO:journalctl@ceph.rgw.foo.a.vm06.stdout:Mar 08 23:12:13 vm06 bash[53236]: debug 2026-03-08T23:12:13.187+0000 7fb8bd4eb980 -1 shutting down 2026-03-08T23:12:23.269 DEBUG:teuthology.orchestra.run.vm06:> sudo pkill -f 'journalctl -f -n 0 -u ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@rgw.foo.a.service' 2026-03-08T23:12:23.280 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-08T23:12:23.280 INFO:tasks.cephadm.ceph.rgw.foo.a:Stopped rgw.foo.a 2026-03-08T23:12:23.280 INFO:tasks.cephadm.prometheus.a:Stopping prometheus.a... 
2026-03-08T23:12:23.280 DEBUG:teuthology.orchestra.run.vm11:> sudo systemctl stop ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@prometheus.a 2026-03-08T23:12:23.376 DEBUG:teuthology.orchestra.run.vm11:> sudo pkill -f 'journalctl -f -n 0 -u ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@prometheus.a.service' 2026-03-08T23:12:23.386 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-08T23:12:23.386 INFO:tasks.cephadm.prometheus.a:Stopped prometheus.a 2026-03-08T23:12:23.386 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid e2eb96e6-1b41-11f1-83e5-75f1b5373d30 --force --keep-logs 2026-03-08T23:12:23.474 INFO:teuthology.orchestra.run.vm06.stdout:Deleting cluster with fsid: e2eb96e6-1b41-11f1-83e5-75f1b5373d30 2026-03-08T23:12:28.279 INFO:journalctl@ceph.alertmanager.a.vm06.stdout:Mar 08 23:12:28 vm06 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:12:28.279 INFO:journalctl@ceph.node-exporter.a.vm06.stdout:Mar 08 23:12:28 vm06 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:12:28.605 INFO:journalctl@ceph.node-exporter.a.vm06.stdout:Mar 08 23:12:28 vm06 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. 
This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:12:28.605 INFO:journalctl@ceph.node-exporter.a.vm06.stdout:Mar 08 23:12:28 vm06 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:12:28.605 INFO:journalctl@ceph.alertmanager.a.vm06.stdout:Mar 08 23:12:28 vm06 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:12:28.605 INFO:journalctl@ceph.alertmanager.a.vm06.stdout:Mar 08 23:12:28 vm06 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:12:28.889 INFO:journalctl@ceph.alertmanager.a.vm06.stdout:Mar 08 23:12:28 vm06 systemd[1]: Stopping Ceph alertmanager.a for e2eb96e6-1b41-11f1-83e5-75f1b5373d30... 
2026-03-08T23:12:28.889 INFO:journalctl@ceph.alertmanager.a.vm06.stdout:Mar 08 23:12:28 vm06 bash[56369]: ts=2026-03-08T23:12:28.684Z caller=main.go:583 level=info msg="Received SIGTERM, exiting gracefully..." 2026-03-08T23:12:28.889 INFO:journalctl@ceph.alertmanager.a.vm06.stdout:Mar 08 23:12:28 vm06 bash[64970]: ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30-alertmanager-a 2026-03-08T23:12:28.889 INFO:journalctl@ceph.alertmanager.a.vm06.stdout:Mar 08 23:12:28 vm06 systemd[1]: ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@alertmanager.a.service: Deactivated successfully. 2026-03-08T23:12:28.889 INFO:journalctl@ceph.alertmanager.a.vm06.stdout:Mar 08 23:12:28 vm06 systemd[1]: Stopped Ceph alertmanager.a for e2eb96e6-1b41-11f1-83e5-75f1b5373d30. 2026-03-08T23:12:29.221 INFO:journalctl@ceph.node-exporter.a.vm06.stdout:Mar 08 23:12:28 vm06 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:12:29.221 INFO:journalctl@ceph.node-exporter.a.vm06.stdout:Mar 08 23:12:28 vm06 systemd[1]: Stopping Ceph node-exporter.a for e2eb96e6-1b41-11f1-83e5-75f1b5373d30... 2026-03-08T23:12:29.221 INFO:journalctl@ceph.node-exporter.a.vm06.stdout:Mar 08 23:12:29 vm06 bash[65094]: ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30-node-exporter-a 2026-03-08T23:12:29.221 INFO:journalctl@ceph.node-exporter.a.vm06.stdout:Mar 08 23:12:29 vm06 systemd[1]: ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@node-exporter.a.service: Main process exited, code=exited, status=143/n/a 2026-03-08T23:12:29.221 INFO:journalctl@ceph.node-exporter.a.vm06.stdout:Mar 08 23:12:29 vm06 systemd[1]: ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@node-exporter.a.service: Failed with result 'exit-code'. 
2026-03-08T23:12:29.221 INFO:journalctl@ceph.node-exporter.a.vm06.stdout:Mar 08 23:12:29 vm06 systemd[1]: Stopped Ceph node-exporter.a for e2eb96e6-1b41-11f1-83e5-75f1b5373d30. 2026-03-08T23:12:29.221 INFO:journalctl@ceph.alertmanager.a.vm06.stdout:Mar 08 23:12:28 vm06 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:12:29.491 INFO:journalctl@ceph.node-exporter.a.vm06.stdout:Mar 08 23:12:29 vm06 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-08T23:12:30.897 DEBUG:teuthology.orchestra.run.vm11:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid e2eb96e6-1b41-11f1-83e5-75f1b5373d30 --force --keep-logs 2026-03-08T23:12:30.992 INFO:teuthology.orchestra.run.vm11.stdout:Deleting cluster with fsid: e2eb96e6-1b41-11f1-83e5-75f1b5373d30 2026-03-08T23:12:35.808 INFO:journalctl@ceph.iscsi.iscsi.a.vm11.stdout:Mar 08 23:12:35 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-08T23:12:35.808 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:12:35 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:12:35.808 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 08 23:12:35 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:12:36.156 INFO:journalctl@ceph.iscsi.iscsi.a.vm11.stdout:Mar 08 23:12:35 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:12:36.156 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 08 23:12:35 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:12:36.156 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:12:35 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:12:36.421 INFO:journalctl@ceph.iscsi.iscsi.a.vm11.stdout:Mar 08 23:12:36 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:12:36.421 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 08 23:12:36 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:12:36.422 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:12:36 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:12:36.690 INFO:journalctl@ceph.iscsi.iscsi.a.vm11.stdout:Mar 08 23:12:36 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:12:36.690 INFO:journalctl@ceph.iscsi.iscsi.a.vm11.stdout:Mar 08 23:12:36 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:12:36.690 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 08 23:12:36 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:12:36.690 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 08 23:12:36 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:12:36.691 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:12:36 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:12:36.691 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:12:36 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:12:37.058 INFO:journalctl@ceph.iscsi.iscsi.a.vm11.stdout:Mar 08 23:12:36 vm11 systemd[1]: Stopping Ceph iscsi.iscsi.a for e2eb96e6-1b41-11f1-83e5-75f1b5373d30...
2026-03-08T23:12:37.058 INFO:journalctl@ceph.iscsi.iscsi.a.vm11.stdout:Mar 08 23:12:36 vm11 bash[48986]: debug Shutdown received
2026-03-08T23:12:47.058 INFO:journalctl@ceph.iscsi.iscsi.a.vm11.stdout:Mar 08 23:12:46 vm11 bash[55324]: ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30-iscsi-iscsi-a
2026-03-08T23:12:47.058 INFO:journalctl@ceph.iscsi.iscsi.a.vm11.stdout:Mar 08 23:12:46 vm11 systemd[1]: ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@iscsi.iscsi.a.service: Main process exited, code=exited, status=137/n/a
2026-03-08T23:12:47.058 INFO:journalctl@ceph.iscsi.iscsi.a.vm11.stdout:Mar 08 23:12:46 vm11 systemd[1]: ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@iscsi.iscsi.a.service: Failed with result 'exit-code'.
2026-03-08T23:12:47.058 INFO:journalctl@ceph.iscsi.iscsi.a.vm11.stdout:Mar 08 23:12:46 vm11 systemd[1]: Stopped Ceph iscsi.iscsi.a for e2eb96e6-1b41-11f1-83e5-75f1b5373d30.
2026-03-08T23:12:47.058 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 08 23:12:47 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:12:47.058 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:12:47 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:12:47.356 INFO:journalctl@ceph.iscsi.iscsi.a.vm11.stdout:Mar 08 23:12:47 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:12:47.356 INFO:journalctl@ceph.iscsi.iscsi.a.vm11.stdout:Mar 08 23:12:47 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:12:47.356 INFO:journalctl@ceph.iscsi.iscsi.a.vm11.stdout:Mar 08 23:12:47 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:12:47.356 INFO:journalctl@ceph.iscsi.iscsi.a.vm11.stdout:Mar 08 23:12:47 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:12:47.356 INFO:journalctl@ceph.iscsi.iscsi.a.vm11.stdout:Mar 08 23:12:47 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:12:47.356 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 08 23:12:47 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:12:47.356 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:12:47 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:12:47.356 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:12:47 vm11 systemd[1]: Stopping Ceph grafana.a for e2eb96e6-1b41-11f1-83e5-75f1b5373d30...
2026-03-08T23:12:47.612 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:12:47 vm11 bash[51186]: logger=server t=2026-03-08T23:12:47.391058703Z level=info msg="Shutdown started" reason="System signal: terminated"
2026-03-08T23:12:47.612 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:12:47 vm11 bash[51186]: logger=ticker t=2026-03-08T23:12:47.391105912Z level=info msg=stopped last_tick=2026-03-08T23:12:40Z
2026-03-08T23:12:47.612 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:12:47 vm11 bash[51186]: logger=grafana-apiserver t=2026-03-08T23:12:47.391194698Z level=info msg="StorageObjectCountTracker pruner is exiting"
2026-03-08T23:12:47.612 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:12:47 vm11 bash[51186]: logger=tracing t=2026-03-08T23:12:47.391224243Z level=info msg="Closing tracing"
2026-03-08T23:12:47.612 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:12:47 vm11 bash[51186]: logger=sqlstore.transactions t=2026-03-08T23:12:47.401734685Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
2026-03-08T23:12:47.612 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:12:47 vm11 bash[55493]: ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30-grafana-a
2026-03-08T23:12:47.612 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:12:47 vm11 systemd[1]: ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@grafana.a.service: Deactivated successfully.
2026-03-08T23:12:47.612 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:12:47 vm11 systemd[1]: Stopped Ceph grafana.a for e2eb96e6-1b41-11f1-83e5-75f1b5373d30.
2026-03-08T23:12:47.912 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 08 23:12:47 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:12:47.912 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 08 23:12:47 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:12:47.913 INFO:journalctl@ceph.grafana.a.vm11.stdout:Mar 08 23:12:47 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:12:48.193 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 08 23:12:47 vm11 systemd[1]: Stopping Ceph node-exporter.b for e2eb96e6-1b41-11f1-83e5-75f1b5373d30...
2026-03-08T23:12:48.193 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 08 23:12:48 vm11 bash[55660]: ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30-node-exporter-b
2026-03-08T23:12:48.193 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 08 23:12:48 vm11 systemd[1]: ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@node-exporter.b.service: Main process exited, code=exited, status=143/n/a
2026-03-08T23:12:48.193 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 08 23:12:48 vm11 systemd[1]: ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@node-exporter.b.service: Failed with result 'exit-code'.
2026-03-08T23:12:48.193 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 08 23:12:48 vm11 systemd[1]: Stopped Ceph node-exporter.b for e2eb96e6-1b41-11f1-83e5-75f1b5373d30.
2026-03-08T23:12:48.476 INFO:journalctl@ceph.node-exporter.b.vm11.stdout:Mar 08 23:12:48 vm11 systemd[1]: /etc/systemd/system/ceph-e2eb96e6-1b41-11f1-83e5-75f1b5373d30@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
2026-03-08T23:12:48.954 DEBUG:teuthology.orchestra.run.vm06:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring
2026-03-08T23:12:48.963 INFO:teuthology.orchestra.run.vm06.stderr:rm: cannot remove '/etc/ceph/ceph.client.admin.keyring': Is a directory
2026-03-08T23:12:48.963 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-08T23:12:48.963 DEBUG:teuthology.orchestra.run.vm11:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring
2026-03-08T23:12:48.972 INFO:tasks.cephadm:Archiving crash dumps...
2026-03-08T23:12:48.973 DEBUG:teuthology.misc:Transferring archived files from vm06:/var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/crash to /archive/kyr-2026-03-08_22:22:45-orch:cephadm-squid-none-default-vps/289/remote/vm06/crash
2026-03-08T23:12:48.973 DEBUG:teuthology.orchestra.run.vm06:> sudo tar c -f - -C /var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/crash -- .
2026-03-08T23:12:49.014 INFO:teuthology.orchestra.run.vm06.stderr:tar: /var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/crash: Cannot open: No such file or directory
2026-03-08T23:12:49.014 INFO:teuthology.orchestra.run.vm06.stderr:tar: Error is not recoverable: exiting now
2026-03-08T23:12:49.015 DEBUG:teuthology.misc:Transferring archived files from vm11:/var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/crash to /archive/kyr-2026-03-08_22:22:45-orch:cephadm-squid-none-default-vps/289/remote/vm11/crash
2026-03-08T23:12:49.015 DEBUG:teuthology.orchestra.run.vm11:> sudo tar c -f - -C /var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/crash -- .
2026-03-08T23:12:49.024 INFO:teuthology.orchestra.run.vm11.stderr:tar: /var/lib/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/crash: Cannot open: No such file or directory
2026-03-08T23:12:49.024 INFO:teuthology.orchestra.run.vm11.stderr:tar: Error is not recoverable: exiting now
2026-03-08T23:12:49.024 INFO:tasks.cephadm:Checking cluster log for badness...
2026-03-08T23:12:49.024 DEBUG:teuthology.orchestra.run.vm06:> sudo egrep '\[ERR\]|\[WRN\]|\[SEC\]' /var/log/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/ceph.log | egrep CEPHADM_ | egrep -v '\(MDS_ALL_DOWN\)' | egrep -v '\(MDS_UP_LESS_THAN_MAX\)' | head -n 1
2026-03-08T23:12:49.064 INFO:tasks.cephadm:Compressing logs...
2026-03-08T23:12:49.064 DEBUG:teuthology.orchestra.run.vm06:> time sudo find /var/log/ceph /var/log/rbd-target-api -name '*.log' -print0 | sudo xargs --max-args=1 --max-procs=0 --verbose -0 --no-run-if-empty -- gzip -5 --verbose --
2026-03-08T23:12:49.108 DEBUG:teuthology.orchestra.run.vm11:> time sudo find /var/log/ceph /var/log/rbd-target-api -name '*.log' -print0 | sudo xargs --max-args=1 --max-procs=0 --verbose -0 --no-run-if-empty -- gzip -5 --verbose --
2026-03-08T23:12:49.115 INFO:teuthology.orchestra.run.vm06.stderr:gzip -5 --verbose -- /var/log/ceph/cephadm.log
2026-03-08T23:12:49.115 INFO:teuthology.orchestra.run.vm06.stderr:find: ‘/var/log/rbd-target-api’: No such file or directory
2026-03-08T23:12:49.115 INFO:teuthology.orchestra.run.vm06.stderr:gzip -5 --verbose -- /var/log/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/ceph-osd.3.log
2026-03-08T23:12:49.116 INFO:teuthology.orchestra.run.vm11.stderr:find: ‘/var/log/rbd-target-api’: No such file or directory
2026-03-08T23:12:49.116 INFO:teuthology.orchestra.run.vm11.stderr:gzip -5 --verbose -- /var/log/ceph/cephadm.log
2026-03-08T23:12:49.116 INFO:teuthology.orchestra.run.vm11.stderr:gzip -5 --verbose -- /var/log/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/ceph-mgr.x.log
2026-03-08T23:12:49.117 INFO:teuthology.orchestra.run.vm11.stderr:/var/log/ceph/cephadm.log: gzip -5 --verbose -- /var/log/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/ceph.log
2026-03-08T23:12:49.117 INFO:teuthology.orchestra.run.vm11.stderr:gzip -5 --verbose -- /var/log/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/ceph-mon.b.log
2026-03-08T23:12:49.118 INFO:teuthology.orchestra.run.vm06.stderr:/var/log/ceph/cephadm.log: 90.4% -- replaced with /var/log/ceph/cephadm.log.gz
2026-03-08T23:12:49.118 INFO:teuthology.orchestra.run.vm06.stderr:gzip -5 --verbose -- /var/log/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/ceph.log
2026-03-08T23:12:49.119 INFO:teuthology.orchestra.run.vm06.stderr:gzip -5 --verbose -- /var/log/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/ceph-mon.c.log
2026-03-08T23:12:49.119 INFO:teuthology.orchestra.run.vm11.stderr:/var/log/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/ceph.log: 88.4% -- replaced with /var/log/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/ceph.log.gz
2026-03-08T23:12:49.120 INFO:teuthology.orchestra.run.vm06.stderr:/var/log/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/ceph-osd.3.log: /var/log/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/ceph.log: 93.5% -- replaced with /var/log/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/ceph.log.gz
2026-03-08T23:12:49.121 INFO:teuthology.orchestra.run.vm06.stderr:gzip -5 --verbose -- /var/log/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/ceph-osd.1.log
2026-03-08T23:12:49.121 INFO:teuthology.orchestra.run.vm11.stderr:gzip -5 --verbose -- /var/log/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/ceph-osd.5.log
2026-03-08T23:12:49.125 INFO:teuthology.orchestra.run.vm11.stderr:/var/log/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/ceph-mon.b.log: /var/log/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/ceph-mgr.x.log: 90.7% -- replaced with /var/log/ceph/cephadm.log.gz
2026-03-08T23:12:49.125 INFO:teuthology.orchestra.run.vm11.stderr:gzip -5 --verbose -- /var/log/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/ceph-osd.7.log
2026-03-08T23:12:49.127 INFO:teuthology.orchestra.run.vm06.stderr:/var/log/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/ceph-mon.c.log: gzip -5 --verbose -- /var/log/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/ceph-mgr.y.log
2026-03-08T23:12:49.129 INFO:teuthology.orchestra.run.vm11.stderr:/var/log/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/ceph-osd.5.log: gzip -5 --verbose -- /var/log/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/ceph-osd.6.log
2026-03-08T23:12:49.137 INFO:teuthology.orchestra.run.vm11.stderr:/var/log/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/ceph-osd.7.log: gzip -5 --verbose -- /var/log/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/ceph.audit.log
2026-03-08T23:12:49.149 INFO:teuthology.orchestra.run.vm11.stderr:/var/log/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/ceph-osd.6.log: 89.6%gzip -5 --verbose -- /var/log/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/ceph-volume.log
2026-03-08T23:12:49.149 INFO:teuthology.orchestra.run.vm11.stderr: -- replaced with /var/log/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/ceph-mgr.x.log.gz
2026-03-08T23:12:49.151 INFO:teuthology.orchestra.run.vm06.stderr:/var/log/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/ceph-osd.1.log: gzip -5 --verbose -- /var/log/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/ceph-mon.a.log
2026-03-08T23:12:49.158 INFO:teuthology.orchestra.run.vm11.stderr:/var/log/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/ceph.audit.log: 90.4%gzip -5 --verbose -- /var/log/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/ceph.cephadm.log
2026-03-08T23:12:49.161 INFO:teuthology.orchestra.run.vm11.stderr:/var/log/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/ceph-volume.log: -- replaced with /var/log/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/ceph.audit.log.gz
2026-03-08T23:12:49.165 INFO:teuthology.orchestra.run.vm11.stderr:gzip -5 --verbose -- /var/log/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/ceph-osd.4.log
2026-03-08T23:12:49.166 INFO:teuthology.orchestra.run.vm11.stderr:/var/log/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/ceph.cephadm.log: 82.7% -- replaced with /var/log/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/ceph.cephadm.log.gz
2026-03-08T23:12:49.168 INFO:teuthology.orchestra.run.vm06.stderr:/var/log/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/ceph-mgr.y.log: gzip -5 --verbose -- /var/log/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/ceph-osd.2.log
2026-03-08T23:12:49.171 INFO:teuthology.orchestra.run.vm06.stderr:/var/log/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/ceph-mon.a.log: gzip -5 --verbose -- /var/log/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/ceph.audit.log
2026-03-08T23:12:49.173 INFO:teuthology.orchestra.run.vm11.stderr:gzip -5 --verbose -- /var/log/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/tcmu-runner.log
2026-03-08T23:12:49.180 INFO:teuthology.orchestra.run.vm06.stderr:/var/log/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/ceph-osd.2.log: gzip -5 --verbose -- /var/log/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/ceph-volume.log
2026-03-08T23:12:49.182 INFO:teuthology.orchestra.run.vm11.stderr:/var/log/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/ceph-osd.4.log: /var/log/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/tcmu-runner.log: 96.1% -- replaced with /var/log/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/ceph-volume.log.gz
2026-03-08T23:12:49.182 INFO:teuthology.orchestra.run.vm11.stderr: 73.2% -- replaced with /var/log/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/tcmu-runner.log.gz
2026-03-08T23:12:49.192 INFO:teuthology.orchestra.run.vm06.stderr:/var/log/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/ceph.audit.log: 94.1%gzip -5 --verbose -- /var/log/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/ceph-client.rgw.foo.a.log
2026-03-08T23:12:49.192 INFO:teuthology.orchestra.run.vm06.stderr:/var/log/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/ceph-volume.log: -- replaced with /var/log/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/ceph.audit.log.gz
2026-03-08T23:12:49.195 INFO:teuthology.orchestra.run.vm06.stderr:gzip -5 --verbose -- /var/log/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/ceph.cephadm.log
2026-03-08T23:12:49.196 INFO:teuthology.orchestra.run.vm06.stderr:/var/log/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/ceph-client.rgw.foo.a.log: 59.5% -- replaced with /var/log/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/ceph-client.rgw.foo.a.log.gz
2026-03-08T23:12:49.204 INFO:teuthology.orchestra.run.vm06.stderr:gzip -5 --verbose -- /var/log/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/ceph-osd.0.log
2026-03-08T23:12:49.208 INFO:teuthology.orchestra.run.vm06.stderr:/var/log/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/ceph.cephadm.log: 90.0% -- replaced with /var/log/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/ceph.cephadm.log.gz
2026-03-08T23:12:49.240 INFO:teuthology.orchestra.run.vm06.stderr:/var/log/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/ceph-osd.0.log: 96.1% -- replaced with /var/log/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/ceph-volume.log.gz
2026-03-08T23:12:49.345 INFO:teuthology.orchestra.run.vm11.stderr: 92.3% -- replaced with /var/log/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/ceph-mon.b.log.gz
2026-03-08T23:12:49.520 INFO:teuthology.orchestra.run.vm06.stderr: 89.7% -- replaced with /var/log/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/ceph-mgr.y.log.gz
2026-03-08T23:12:49.644 INFO:teuthology.orchestra.run.vm06.stderr: 92.4% -- replaced with /var/log/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/ceph-mon.c.log.gz
2026-03-08T23:12:49.649 INFO:teuthology.orchestra.run.vm11.stderr: 93.1% -- replaced with /var/log/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/ceph-osd.6.log.gz
2026-03-08T23:12:49.690 INFO:teuthology.orchestra.run.vm11.stderr: 93.4% -- replaced with /var/log/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/ceph-osd.4.log.gz
2026-03-08T23:12:49.719 INFO:teuthology.orchestra.run.vm11.stderr: 93.4% -- replaced with /var/log/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/ceph-osd.5.log.gz
2026-03-08T23:12:49.745 INFO:teuthology.orchestra.run.vm11.stderr: 93.6% -- replaced with /var/log/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/ceph-osd.7.log.gz
2026-03-08T23:12:49.747 INFO:teuthology.orchestra.run.vm11.stderr:
2026-03-08T23:12:49.747 INFO:teuthology.orchestra.run.vm11.stderr:real 0m0.636s
2026-03-08T23:12:49.747 INFO:teuthology.orchestra.run.vm11.stderr:user 0m1.171s
2026-03-08T23:12:49.747 INFO:teuthology.orchestra.run.vm11.stderr:sys 0m0.065s
2026-03-08T23:12:49.784 INFO:teuthology.orchestra.run.vm06.stderr: 93.0% -- replaced with /var/log/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/ceph-osd.2.log.gz
2026-03-08T23:12:49.863 INFO:teuthology.orchestra.run.vm06.stderr: 93.3% -- replaced with /var/log/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/ceph-osd.1.log.gz
2026-03-08T23:12:49.890 INFO:teuthology.orchestra.run.vm06.stderr: 93.2% -- replaced with /var/log/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/ceph-osd.3.log.gz
2026-03-08T23:12:49.908 INFO:teuthology.orchestra.run.vm06.stderr: 93.3% -- replaced with /var/log/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/ceph-osd.0.log.gz
2026-03-08T23:12:49.991 INFO:teuthology.orchestra.run.vm06.stderr: 91.2% -- replaced with /var/log/ceph/e2eb96e6-1b41-11f1-83e5-75f1b5373d30/ceph-mon.a.log.gz
2026-03-08T23:12:49.992 INFO:teuthology.orchestra.run.vm06.stderr:
2026-03-08T23:12:49.992 INFO:teuthology.orchestra.run.vm06.stderr:real 0m0.883s
2026-03-08T23:12:49.992 INFO:teuthology.orchestra.run.vm06.stderr:user 0m1.563s
2026-03-08T23:12:49.992 INFO:teuthology.orchestra.run.vm06.stderr:sys 0m0.110s
2026-03-08T23:12:49.992 INFO:tasks.cephadm:Archiving logs...
2026-03-08T23:12:49.993 DEBUG:teuthology.misc:Transferring archived files from vm06:/var/log/ceph to /archive/kyr-2026-03-08_22:22:45-orch:cephadm-squid-none-default-vps/289/remote/vm06/log
2026-03-08T23:12:49.993 DEBUG:teuthology.orchestra.run.vm06:> sudo tar c -f - -C /var/log/ceph -- .
2026-03-08T23:12:50.120 DEBUG:teuthology.misc:Transferring archived files from vm11:/var/log/ceph to /archive/kyr-2026-03-08_22:22:45-orch:cephadm-squid-none-default-vps/289/remote/vm11/log
2026-03-08T23:12:50.120 DEBUG:teuthology.orchestra.run.vm11:> sudo tar c -f - -C /var/log/ceph -- .
2026-03-08T23:12:50.188 INFO:tasks.cephadm:Removing cluster...
2026-03-08T23:12:50.188 DEBUG:teuthology.orchestra.run.vm06:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid e2eb96e6-1b41-11f1-83e5-75f1b5373d30 --force
2026-03-08T23:12:50.281 INFO:teuthology.orchestra.run.vm06.stdout:Deleting cluster with fsid: e2eb96e6-1b41-11f1-83e5-75f1b5373d30
2026-03-08T23:12:51.521 DEBUG:teuthology.orchestra.run.vm11:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid e2eb96e6-1b41-11f1-83e5-75f1b5373d30 --force
2026-03-08T23:12:51.611 INFO:teuthology.orchestra.run.vm11.stdout:Deleting cluster with fsid: e2eb96e6-1b41-11f1-83e5-75f1b5373d30
2026-03-08T23:12:52.854 INFO:tasks.cephadm:Removing cephadm ...
2026-03-08T23:12:52.855 DEBUG:teuthology.orchestra.run.vm06:> rm -rf /home/ubuntu/cephtest/cephadm
2026-03-08T23:12:52.860 DEBUG:teuthology.orchestra.run.vm11:> rm -rf /home/ubuntu/cephtest/cephadm
2026-03-08T23:12:52.864 INFO:tasks.cephadm:Teardown complete
2026-03-08T23:12:52.864 DEBUG:teuthology.run_tasks:Unwinding manager install
2026-03-08T23:12:52.867 INFO:teuthology.task.install.util:Removing shipped files: /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer...
2026-03-08T23:12:52.867 DEBUG:teuthology.orchestra.run.vm06:> sudo rm -f -- /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer
2026-03-08T23:12:52.904 DEBUG:teuthology.orchestra.run.vm11:> sudo rm -f -- /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer
2026-03-08T23:12:52.922 INFO:teuthology.task.install.deb:Removing packages: ceph, cephadm, ceph-mds, ceph-mgr, ceph-common, ceph-fuse, ceph-test, ceph-volume, radosgw, python3-rados, python3-rgw, python3-cephfs, python3-rbd, libcephfs2, libcephfs-dev, librados2, librbd1, rbd-fuse on Debian system.
2026-03-08T23:12:52.922 DEBUG:teuthology.orchestra.run.vm06:> for d in ceph cephadm ceph-mds ceph-mgr ceph-common ceph-fuse ceph-test ceph-volume radosgw python3-rados python3-rgw python3-cephfs python3-rbd libcephfs2 libcephfs-dev librados2 librbd1 rbd-fuse ; do sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" purge $d || true ; done
2026-03-08T23:12:52.928 INFO:teuthology.task.install.deb:Removing packages: ceph, cephadm, ceph-mds, ceph-mgr, ceph-common, ceph-fuse, ceph-test, ceph-volume, radosgw, python3-rados, python3-rgw, python3-cephfs, python3-rbd, libcephfs2, libcephfs-dev, librados2, librbd1, rbd-fuse on Debian system.
2026-03-08T23:12:52.928 DEBUG:teuthology.orchestra.run.vm11:> for d in ceph cephadm ceph-mds ceph-mgr ceph-common ceph-fuse ceph-test ceph-volume radosgw python3-rados python3-rgw python3-cephfs python3-rbd libcephfs2 libcephfs-dev librados2 librbd1 rbd-fuse ; do sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" purge $d || true ; done
2026-03-08T23:12:52.990 INFO:teuthology.orchestra.run.vm06.stdout:Reading package lists...
2026-03-08T23:12:52.991 INFO:teuthology.orchestra.run.vm11.stdout:Reading package lists...
2026-03-08T23:12:53.181 INFO:teuthology.orchestra.run.vm11.stdout:Building dependency tree...
2026-03-08T23:12:53.181 INFO:teuthology.orchestra.run.vm11.stdout:Reading state information...
2026-03-08T23:12:53.200 INFO:teuthology.orchestra.run.vm06.stdout:Building dependency tree...
2026-03-08T23:12:53.201 INFO:teuthology.orchestra.run.vm06.stdout:Reading state information...
2026-03-08T23:12:53.279 INFO:teuthology.orchestra.run.vm11.stdout:The following packages were automatically installed and are no longer required:
2026-03-08T23:12:53.279 INFO:teuthology.orchestra.run.vm11.stdout: ceph-mon kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-08T23:12:53.279 INFO:teuthology.orchestra.run.vm11.stdout: libsgutils2-2 sg3-utils sg3-utils-udev
2026-03-08T23:12:53.279 INFO:teuthology.orchestra.run.vm11.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-08T23:12:53.287 INFO:teuthology.orchestra.run.vm11.stdout:The following packages will be REMOVED:
2026-03-08T23:12:53.287 INFO:teuthology.orchestra.run.vm11.stdout: ceph*
2026-03-08T23:12:53.355 INFO:teuthology.orchestra.run.vm06.stdout:The following packages were automatically installed and are no longer required:
2026-03-08T23:12:53.355 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mon kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-08T23:12:53.355 INFO:teuthology.orchestra.run.vm06.stdout: libsgutils2-2 sg3-utils sg3-utils-udev
2026-03-08T23:12:53.355 INFO:teuthology.orchestra.run.vm06.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-08T23:12:53.363 INFO:teuthology.orchestra.run.vm06.stdout:The following packages will be REMOVED:
2026-03-08T23:12:53.363 INFO:teuthology.orchestra.run.vm06.stdout: ceph*
2026-03-08T23:12:53.469 INFO:teuthology.orchestra.run.vm11.stdout:0 upgraded, 0 newly installed, 1 to remove and 10 not upgraded.
2026-03-08T23:12:53.469 INFO:teuthology.orchestra.run.vm11.stdout:After this operation, 47.1 kB disk space will be freed.
2026-03-08T23:12:53.505 INFO:teuthology.orchestra.run.vm11.stdout:(Reading database ... 118605 files and directories currently installed.)
2026-03-08T23:12:53.506 INFO:teuthology.orchestra.run.vm11.stdout:Removing ceph (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:12:53.526 INFO:teuthology.orchestra.run.vm06.stdout:0 upgraded, 0 newly installed, 1 to remove and 10 not upgraded.
2026-03-08T23:12:53.526 INFO:teuthology.orchestra.run.vm06.stdout:After this operation, 47.1 kB disk space will be freed.
2026-03-08T23:12:53.627 INFO:teuthology.orchestra.run.vm06.stdout:(Reading database ... 118605 files and directories currently installed.)
2026-03-08T23:12:53.628 INFO:teuthology.orchestra.run.vm06.stdout:Removing ceph (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:12:54.661 INFO:teuthology.orchestra.run.vm11.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-08T23:12:54.700 INFO:teuthology.orchestra.run.vm11.stdout:Reading package lists...
2026-03-08T23:12:54.781 INFO:teuthology.orchestra.run.vm06.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-08T23:12:54.816 INFO:teuthology.orchestra.run.vm06.stdout:Reading package lists...
2026-03-08T23:12:54.897 INFO:teuthology.orchestra.run.vm11.stdout:Building dependency tree...
2026-03-08T23:12:54.898 INFO:teuthology.orchestra.run.vm11.stdout:Reading state information...
2026-03-08T23:12:54.980 INFO:teuthology.orchestra.run.vm06.stdout:Building dependency tree...
2026-03-08T23:12:54.980 INFO:teuthology.orchestra.run.vm06.stdout:Reading state information...
2026-03-08T23:12:55.114 INFO:teuthology.orchestra.run.vm11.stdout:The following packages were automatically installed and are no longer required:
2026-03-08T23:12:55.114 INFO:teuthology.orchestra.run.vm11.stdout: ceph-mon kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-08T23:12:55.115 INFO:teuthology.orchestra.run.vm11.stdout: libsgutils2-2 python-asyncssh-doc python3-asyncssh sg3-utils sg3-utils-udev
2026-03-08T23:12:55.115 INFO:teuthology.orchestra.run.vm11.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-08T23:12:55.132 INFO:teuthology.orchestra.run.vm11.stdout:The following packages will be REMOVED:
2026-03-08T23:12:55.133 INFO:teuthology.orchestra.run.vm11.stdout: ceph-mgr-cephadm* cephadm*
2026-03-08T23:12:55.191 INFO:teuthology.orchestra.run.vm06.stdout:The following packages were automatically installed and are no longer required:
2026-03-08T23:12:55.191 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mon kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-08T23:12:55.192 INFO:teuthology.orchestra.run.vm06.stdout: libsgutils2-2 python-asyncssh-doc python3-asyncssh sg3-utils sg3-utils-udev
2026-03-08T23:12:55.192 INFO:teuthology.orchestra.run.vm06.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-08T23:12:55.204 INFO:teuthology.orchestra.run.vm06.stdout:The following packages will be REMOVED:
2026-03-08T23:12:55.205 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mgr-cephadm* cephadm*
2026-03-08T23:12:55.320 INFO:teuthology.orchestra.run.vm11.stdout:0 upgraded, 0 newly installed, 2 to remove and 10 not upgraded.
2026-03-08T23:12:55.320 INFO:teuthology.orchestra.run.vm11.stdout:After this operation, 1775 kB disk space will be freed.
2026-03-08T23:12:55.358 INFO:teuthology.orchestra.run.vm11.stdout:(Reading database ... 118603 files and directories currently installed.)
2026-03-08T23:12:55.360 INFO:teuthology.orchestra.run.vm11.stdout:Removing ceph-mgr-cephadm (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:12:55.380 INFO:teuthology.orchestra.run.vm11.stdout:Removing cephadm (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:12:55.395 INFO:teuthology.orchestra.run.vm06.stdout:0 upgraded, 0 newly installed, 2 to remove and 10 not upgraded.
2026-03-08T23:12:55.395 INFO:teuthology.orchestra.run.vm06.stdout:After this operation, 1775 kB disk space will be freed.
2026-03-08T23:12:55.411 INFO:teuthology.orchestra.run.vm11.stdout:Looking for files to backup/remove ...
2026-03-08T23:12:55.412 INFO:teuthology.orchestra.run.vm11.stdout:Not backing up/removing `/var/lib/cephadm', it matches ^/var/.*.
2026-03-08T23:12:55.414 INFO:teuthology.orchestra.run.vm11.stdout:Removing user `cephadm' ...
2026-03-08T23:12:55.414 INFO:teuthology.orchestra.run.vm11.stdout:Warning: group `nogroup' has no more members.
2026-03-08T23:12:55.426 INFO:teuthology.orchestra.run.vm11.stdout:Done.
2026-03-08T23:12:55.435 INFO:teuthology.orchestra.run.vm06.stdout:(Reading database ... 118603 files and directories currently installed.)
2026-03-08T23:12:55.437 INFO:teuthology.orchestra.run.vm06.stdout:Removing ceph-mgr-cephadm (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:12:55.452 INFO:teuthology.orchestra.run.vm11.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-08T23:12:55.458 INFO:teuthology.orchestra.run.vm06.stdout:Removing cephadm (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:12:55.487 INFO:teuthology.orchestra.run.vm06.stdout:Looking for files to backup/remove ...
2026-03-08T23:12:55.488 INFO:teuthology.orchestra.run.vm06.stdout:Not backing up/removing `/var/lib/cephadm', it matches ^/var/.*.
2026-03-08T23:12:55.490 INFO:teuthology.orchestra.run.vm06.stdout:Removing user `cephadm' ...
2026-03-08T23:12:55.490 INFO:teuthology.orchestra.run.vm06.stdout:Warning: group `nogroup' has no more members.
2026-03-08T23:12:55.501 INFO:teuthology.orchestra.run.vm06.stdout:Done.
2026-03-08T23:12:55.525 INFO:teuthology.orchestra.run.vm06.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-08T23:12:55.556 INFO:teuthology.orchestra.run.vm11.stdout:(Reading database ... 118529 files and directories currently installed.)
2026-03-08T23:12:55.559 INFO:teuthology.orchestra.run.vm11.stdout:Purging configuration files for cephadm (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:12:55.627 INFO:teuthology.orchestra.run.vm06.stdout:(Reading database ... 118529 files and directories currently installed.)
2026-03-08T23:12:55.628 INFO:teuthology.orchestra.run.vm06.stdout:Purging configuration files for cephadm (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:12:56.621 INFO:teuthology.orchestra.run.vm06.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-08T23:12:56.654 INFO:teuthology.orchestra.run.vm06.stdout:Reading package lists...
2026-03-08T23:12:56.669 INFO:teuthology.orchestra.run.vm11.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-08T23:12:56.706 INFO:teuthology.orchestra.run.vm11.stdout:Reading package lists...
2026-03-08T23:12:56.826 INFO:teuthology.orchestra.run.vm06.stdout:Building dependency tree...
2026-03-08T23:12:56.826 INFO:teuthology.orchestra.run.vm06.stdout:Reading state information...
2026-03-08T23:12:56.880 INFO:teuthology.orchestra.run.vm11.stdout:Building dependency tree...
2026-03-08T23:12:56.880 INFO:teuthology.orchestra.run.vm11.stdout:Reading state information...
2026-03-08T23:12:56.931 INFO:teuthology.orchestra.run.vm06.stdout:The following packages were automatically installed and are no longer required:
2026-03-08T23:12:56.931 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mon kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-08T23:12:56.931 INFO:teuthology.orchestra.run.vm06.stdout: libsgutils2-2 python-asyncssh-doc python3-asyncssh sg3-utils sg3-utils-udev
2026-03-08T23:12:56.931 INFO:teuthology.orchestra.run.vm06.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-08T23:12:56.938 INFO:teuthology.orchestra.run.vm06.stdout:The following packages will be REMOVED:
2026-03-08T23:12:56.939 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mds*
2026-03-08T23:12:56.981 INFO:teuthology.orchestra.run.vm11.stdout:The following packages were automatically installed and are no longer required:
2026-03-08T23:12:56.981 INFO:teuthology.orchestra.run.vm11.stdout: ceph-mon kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-08T23:12:56.982 INFO:teuthology.orchestra.run.vm11.stdout: libsgutils2-2 python-asyncssh-doc python3-asyncssh sg3-utils sg3-utils-udev
2026-03-08T23:12:56.982 INFO:teuthology.orchestra.run.vm11.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-08T23:12:56.989 INFO:teuthology.orchestra.run.vm11.stdout:The following packages will be REMOVED:
2026-03-08T23:12:56.989 INFO:teuthology.orchestra.run.vm11.stdout: ceph-mds*
2026-03-08T23:12:57.118 INFO:teuthology.orchestra.run.vm06.stdout:0 upgraded, 0 newly installed, 1 to remove and 10 not upgraded.
2026-03-08T23:12:57.118 INFO:teuthology.orchestra.run.vm06.stdout:After this operation, 7437 kB disk space will be freed.
2026-03-08T23:12:57.144 INFO:teuthology.orchestra.run.vm11.stdout:0 upgraded, 0 newly installed, 1 to remove and 10 not upgraded.
2026-03-08T23:12:57.144 INFO:teuthology.orchestra.run.vm11.stdout:After this operation, 7437 kB disk space will be freed.
2026-03-08T23:12:57.159 INFO:teuthology.orchestra.run.vm06.stdout:(Reading database ... 118529 files and directories currently installed.)
2026-03-08T23:12:57.162 INFO:teuthology.orchestra.run.vm06.stdout:Removing ceph-mds (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:12:57.188 INFO:teuthology.orchestra.run.vm11.stdout:(Reading database ... 118529 files and directories currently installed.)
2026-03-08T23:12:57.191 INFO:teuthology.orchestra.run.vm11.stdout:Removing ceph-mds (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:12:57.629 INFO:teuthology.orchestra.run.vm11.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-08T23:12:57.661 INFO:teuthology.orchestra.run.vm06.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-08T23:12:57.728 INFO:teuthology.orchestra.run.vm11.stdout:(Reading database ... 118521 files and directories currently installed.)
2026-03-08T23:12:57.731 INFO:teuthology.orchestra.run.vm11.stdout:Purging configuration files for ceph-mds (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:12:57.763 INFO:teuthology.orchestra.run.vm06.stdout:(Reading database ... 118521 files and directories currently installed.)
2026-03-08T23:12:57.765 INFO:teuthology.orchestra.run.vm06.stdout:Purging configuration files for ceph-mds (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:12:59.189 INFO:teuthology.orchestra.run.vm11.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-08T23:12:59.208 INFO:teuthology.orchestra.run.vm06.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-08T23:12:59.224 INFO:teuthology.orchestra.run.vm11.stdout:Reading package lists...
2026-03-08T23:12:59.241 INFO:teuthology.orchestra.run.vm06.stdout:Reading package lists...
2026-03-08T23:12:59.369 INFO:teuthology.orchestra.run.vm06.stdout:Building dependency tree...
2026-03-08T23:12:59.370 INFO:teuthology.orchestra.run.vm06.stdout:Reading state information...
2026-03-08T23:12:59.382 INFO:teuthology.orchestra.run.vm11.stdout:Building dependency tree...
2026-03-08T23:12:59.383 INFO:teuthology.orchestra.run.vm11.stdout:Reading state information...
2026-03-08T23:12:59.472 INFO:teuthology.orchestra.run.vm06.stdout:The following packages were automatically installed and are no longer required:
2026-03-08T23:12:59.472 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mgr-modules-core ceph-mon kpartx libboost-iostreams1.74.0
2026-03-08T23:12:59.472 INFO:teuthology.orchestra.run.vm06.stdout: libboost-thread1.74.0 libpmemobj1 libsgutils2-2 python-asyncssh-doc
2026-03-08T23:12:59.472 INFO:teuthology.orchestra.run.vm06.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools python3-cheroot
2026-03-08T23:12:59.472 INFO:teuthology.orchestra.run.vm06.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes
2026-03-08T23:12:59.472 INFO:teuthology.orchestra.run.vm06.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text
2026-03-08T23:12:59.472 INFO:teuthology.orchestra.run.vm06.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako
2026-03-08T23:12:59.472 INFO:teuthology.orchestra.run.vm06.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript
2026-03-08T23:12:59.472 INFO:teuthology.orchestra.run.vm06.stdout: python3-pecan python3-portend python3-psutil python3-pyinotify
2026-03-08T23:12:59.472 INFO:teuthology.orchestra.run.vm06.stdout: python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa
2026-03-08T23:12:59.472 INFO:teuthology.orchestra.run.vm06.stdout: python3-simplegeneric python3-simplejson python3-singledispatch
2026-03-08T23:12:59.472 INFO:teuthology.orchestra.run.vm06.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora
2026-03-08T23:12:59.472 INFO:teuthology.orchestra.run.vm06.stdout: python3-threadpoolctl python3-waitress python3-webob python3-websocket
2026-03-08T23:12:59.472 INFO:teuthology.orchestra.run.vm06.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils
2026-03-08T23:12:59.472 INFO:teuthology.orchestra.run.vm06.stdout: sg3-utils-udev
2026-03-08T23:12:59.472 INFO:teuthology.orchestra.run.vm06.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-08T23:12:59.479 INFO:teuthology.orchestra.run.vm06.stdout:The following packages will be REMOVED:
2026-03-08T23:12:59.479 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mgr* ceph-mgr-dashboard* ceph-mgr-diskprediction-local*
2026-03-08T23:12:59.479 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mgr-k8sevents*
2026-03-08T23:12:59.519 INFO:teuthology.orchestra.run.vm11.stdout:The following packages were automatically installed and are no longer required:
2026-03-08T23:12:59.519 INFO:teuthology.orchestra.run.vm11.stdout: ceph-mgr-modules-core ceph-mon kpartx libboost-iostreams1.74.0
2026-03-08T23:12:59.520 INFO:teuthology.orchestra.run.vm11.stdout: libboost-thread1.74.0 libpmemobj1 libsgutils2-2 python-asyncssh-doc
2026-03-08T23:12:59.520 INFO:teuthology.orchestra.run.vm11.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools python3-cheroot
2026-03-08T23:12:59.520 INFO:teuthology.orchestra.run.vm11.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes
2026-03-08T23:12:59.520 INFO:teuthology.orchestra.run.vm11.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text
2026-03-08T23:12:59.520 INFO:teuthology.orchestra.run.vm11.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako
2026-03-08T23:12:59.520 INFO:teuthology.orchestra.run.vm11.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript
2026-03-08T23:12:59.520 INFO:teuthology.orchestra.run.vm11.stdout: python3-pecan python3-portend python3-psutil python3-pyinotify
2026-03-08T23:12:59.520 INFO:teuthology.orchestra.run.vm11.stdout: python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa
2026-03-08T23:12:59.520 INFO:teuthology.orchestra.run.vm11.stdout: python3-simplegeneric python3-simplejson python3-singledispatch
2026-03-08T23:12:59.520 INFO:teuthology.orchestra.run.vm11.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora
2026-03-08T23:12:59.520 INFO:teuthology.orchestra.run.vm11.stdout: python3-threadpoolctl python3-waitress python3-webob python3-websocket
2026-03-08T23:12:59.520 INFO:teuthology.orchestra.run.vm11.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils
2026-03-08T23:12:59.520 INFO:teuthology.orchestra.run.vm11.stdout: sg3-utils-udev
2026-03-08T23:12:59.520 INFO:teuthology.orchestra.run.vm11.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-08T23:12:59.531 INFO:teuthology.orchestra.run.vm11.stdout:The following packages will be REMOVED:
2026-03-08T23:12:59.531 INFO:teuthology.orchestra.run.vm11.stdout: ceph-mgr* ceph-mgr-dashboard* ceph-mgr-diskprediction-local*
2026-03-08T23:12:59.532 INFO:teuthology.orchestra.run.vm11.stdout: ceph-mgr-k8sevents*
2026-03-08T23:12:59.633 INFO:teuthology.orchestra.run.vm06.stdout:0 upgraded, 0 newly installed, 4 to remove and 10 not upgraded.
2026-03-08T23:12:59.633 INFO:teuthology.orchestra.run.vm06.stdout:After this operation, 165 MB disk space will be freed.
2026-03-08T23:12:59.674 INFO:teuthology.orchestra.run.vm06.stdout:(Reading database ... 118521 files and directories currently installed.)
2026-03-08T23:12:59.676 INFO:teuthology.orchestra.run.vm06.stdout:Removing ceph-mgr-k8sevents (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:12:59.690 INFO:teuthology.orchestra.run.vm06.stdout:Removing ceph-mgr-diskprediction-local (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:12:59.706 INFO:teuthology.orchestra.run.vm11.stdout:0 upgraded, 0 newly installed, 4 to remove and 10 not upgraded.
2026-03-08T23:12:59.706 INFO:teuthology.orchestra.run.vm11.stdout:After this operation, 165 MB disk space will be freed.
2026-03-08T23:12:59.722 INFO:teuthology.orchestra.run.vm06.stdout:Removing ceph-mgr-dashboard (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:12:59.743 INFO:teuthology.orchestra.run.vm11.stdout:(Reading database ... 118521 files and directories currently installed.)
2026-03-08T23:12:59.745 INFO:teuthology.orchestra.run.vm11.stdout:Removing ceph-mgr-k8sevents (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:12:59.756 INFO:teuthology.orchestra.run.vm11.stdout:Removing ceph-mgr-diskprediction-local (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:12:59.763 INFO:teuthology.orchestra.run.vm06.stdout:Removing ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:12:59.784 INFO:teuthology.orchestra.run.vm11.stdout:Removing ceph-mgr-dashboard (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:12:59.823 INFO:teuthology.orchestra.run.vm11.stdout:Removing ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:13:00.279 INFO:teuthology.orchestra.run.vm06.stdout:(Reading database ... 117937 files and directories currently installed.)
2026-03-08T23:13:00.280 INFO:teuthology.orchestra.run.vm06.stdout:Purging configuration files for ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:13:00.324 INFO:teuthology.orchestra.run.vm11.stdout:(Reading database ... 117937 files and directories currently installed.)
2026-03-08T23:13:00.326 INFO:teuthology.orchestra.run.vm11.stdout:Purging configuration files for ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:13:01.787 INFO:teuthology.orchestra.run.vm11.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-08T23:13:01.821 INFO:teuthology.orchestra.run.vm11.stdout:Reading package lists...
2026-03-08T23:13:01.938 INFO:teuthology.orchestra.run.vm06.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-08T23:13:01.971 INFO:teuthology.orchestra.run.vm06.stdout:Reading package lists...
2026-03-08T23:13:02.021 INFO:teuthology.orchestra.run.vm11.stdout:Building dependency tree...
2026-03-08T23:13:02.022 INFO:teuthology.orchestra.run.vm11.stdout:Reading state information...
2026-03-08T23:13:02.177 INFO:teuthology.orchestra.run.vm11.stdout:The following packages were automatically installed and are no longer required:
2026-03-08T23:13:02.178 INFO:teuthology.orchestra.run.vm11.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-08T23:13:02.178 INFO:teuthology.orchestra.run.vm11.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1
2026-03-08T23:13:02.178 INFO:teuthology.orchestra.run.vm11.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc
2026-03-08T23:13:02.178 INFO:teuthology.orchestra.run.vm11.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-08T23:13:02.178 INFO:teuthology.orchestra.run.vm11.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth
2026-03-08T23:13:02.178 INFO:teuthology.orchestra.run.vm11.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools
2026-03-08T23:13:02.178 INFO:teuthology.orchestra.run.vm11.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils
2026-03-08T23:13:02.178 INFO:teuthology.orchestra.run.vm11.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy
2026-03-08T23:13:02.178 INFO:teuthology.orchestra.run.vm11.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable
2026-03-08T23:13:02.178 INFO:teuthology.orchestra.run.vm11.stdout: python3-psutil python3-pyinotify python3-repoze.lru
2026-03-08T23:13:02.178 INFO:teuthology.orchestra.run.vm11.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric
2026-03-08T23:13:02.178 INFO:teuthology.orchestra.run.vm11.stdout: python3-simplejson python3-singledispatch python3-sklearn
2026-03-08T23:13:02.178 INFO:teuthology.orchestra.run.vm11.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl
2026-03-08T23:13:02.178 INFO:teuthology.orchestra.run.vm11.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket
2026-03-08T23:13:02.178 INFO:teuthology.orchestra.run.vm11.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils
2026-03-08T23:13:02.178 INFO:teuthology.orchestra.run.vm11.stdout: sg3-utils-udev smartmontools socat xmlstarlet
2026-03-08T23:13:02.178 INFO:teuthology.orchestra.run.vm11.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-08T23:13:02.186 INFO:teuthology.orchestra.run.vm11.stdout:The following packages will be REMOVED:
2026-03-08T23:13:02.186 INFO:teuthology.orchestra.run.vm11.stdout: ceph-base* ceph-common* ceph-mon* ceph-osd* ceph-test* ceph-volume* radosgw*
2026-03-08T23:13:02.189 INFO:teuthology.orchestra.run.vm06.stdout:Building dependency tree...
2026-03-08T23:13:02.190 INFO:teuthology.orchestra.run.vm06.stdout:Reading state information...
2026-03-08T23:13:02.343 INFO:teuthology.orchestra.run.vm06.stdout:The following packages were automatically installed and are no longer required:
2026-03-08T23:13:02.343 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-08T23:13:02.343 INFO:teuthology.orchestra.run.vm06.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1
2026-03-08T23:13:02.343 INFO:teuthology.orchestra.run.vm06.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc
2026-03-08T23:13:02.343 INFO:teuthology.orchestra.run.vm06.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-08T23:13:02.343 INFO:teuthology.orchestra.run.vm06.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth
2026-03-08T23:13:02.343 INFO:teuthology.orchestra.run.vm06.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools
2026-03-08T23:13:02.343 INFO:teuthology.orchestra.run.vm06.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils
2026-03-08T23:13:02.343 INFO:teuthology.orchestra.run.vm06.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy
2026-03-08T23:13:02.343 INFO:teuthology.orchestra.run.vm06.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable
2026-03-08T23:13:02.343 INFO:teuthology.orchestra.run.vm06.stdout: python3-psutil python3-pyinotify python3-repoze.lru
2026-03-08T23:13:02.343 INFO:teuthology.orchestra.run.vm06.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric
2026-03-08T23:13:02.343 INFO:teuthology.orchestra.run.vm06.stdout: python3-simplejson python3-singledispatch python3-sklearn
2026-03-08T23:13:02.343 INFO:teuthology.orchestra.run.vm06.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl
2026-03-08T23:13:02.343 INFO:teuthology.orchestra.run.vm06.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket
2026-03-08T23:13:02.343 INFO:teuthology.orchestra.run.vm06.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils
2026-03-08T23:13:02.343 INFO:teuthology.orchestra.run.vm06.stdout: sg3-utils-udev smartmontools socat xmlstarlet
2026-03-08T23:13:02.343 INFO:teuthology.orchestra.run.vm06.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-08T23:13:02.352 INFO:teuthology.orchestra.run.vm11.stdout:0 upgraded, 0 newly installed, 7 to remove and 10 not upgraded.
2026-03-08T23:13:02.352 INFO:teuthology.orchestra.run.vm11.stdout:After this operation, 472 MB disk space will be freed.
2026-03-08T23:13:02.355 INFO:teuthology.orchestra.run.vm06.stdout:The following packages will be REMOVED:
2026-03-08T23:13:02.356 INFO:teuthology.orchestra.run.vm06.stdout: ceph-base* ceph-common* ceph-mon* ceph-osd* ceph-test* ceph-volume* radosgw*
2026-03-08T23:13:02.387 INFO:teuthology.orchestra.run.vm11.stdout:(Reading database ... 117937 files and directories currently installed.)
2026-03-08T23:13:02.389 INFO:teuthology.orchestra.run.vm11.stdout:Removing ceph-volume (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:13:02.449 INFO:teuthology.orchestra.run.vm11.stdout:Removing ceph-osd (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:13:02.535 INFO:teuthology.orchestra.run.vm06.stdout:0 upgraded, 0 newly installed, 7 to remove and 10 not upgraded.
2026-03-08T23:13:02.536 INFO:teuthology.orchestra.run.vm06.stdout:After this operation, 472 MB disk space will be freed.
2026-03-08T23:13:02.580 INFO:teuthology.orchestra.run.vm06.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117937 files and directories currently installed.)
2026-03-08T23:13:02.582 INFO:teuthology.orchestra.run.vm06.stdout:Removing ceph-volume (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:13:02.646 INFO:teuthology.orchestra.run.vm06.stdout:Removing ceph-osd (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:13:02.889 INFO:teuthology.orchestra.run.vm11.stdout:Removing ceph-mon (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:13:03.129 INFO:teuthology.orchestra.run.vm06.stdout:Removing ceph-mon (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:13:03.336 INFO:teuthology.orchestra.run.vm11.stdout:Removing ceph-base (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:13:03.552 INFO:teuthology.orchestra.run.vm06.stdout:Removing ceph-base (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:13:03.850 INFO:teuthology.orchestra.run.vm11.stdout:Removing radosgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:13:04.025 INFO:teuthology.orchestra.run.vm06.stdout:Removing radosgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:13:04.306 INFO:teuthology.orchestra.run.vm11.stdout:Removing ceph-test (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:13:04.364 INFO:teuthology.orchestra.run.vm11.stdout:Removing ceph-common (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:13:04.467 INFO:teuthology.orchestra.run.vm06.stdout:Removing ceph-test (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:13:04.508 INFO:teuthology.orchestra.run.vm06.stdout:Removing ceph-common (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:13:04.893 INFO:teuthology.orchestra.run.vm11.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-08T23:13:04.934 INFO:teuthology.orchestra.run.vm11.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ...
2026-03-08T23:13:05.008 INFO:teuthology.orchestra.run.vm06.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-08T23:13:05.018 INFO:teuthology.orchestra.run.vm11.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117455 files and directories currently installed.)
2026-03-08T23:13:05.021 INFO:teuthology.orchestra.run.vm11.stdout:Purging configuration files for radosgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:13:05.049 INFO:teuthology.orchestra.run.vm06.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ...
2026-03-08T23:13:05.136 INFO:teuthology.orchestra.run.vm06.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117455 files and directories currently installed.)
2026-03-08T23:13:05.139 INFO:teuthology.orchestra.run.vm06.stdout:Purging configuration files for radosgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:13:05.665 INFO:teuthology.orchestra.run.vm11.stdout:Purging configuration files for ceph-mon (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:13:05.810 INFO:teuthology.orchestra.run.vm06.stdout:Purging configuration files for ceph-mon (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:13:06.375 INFO:teuthology.orchestra.run.vm11.stdout:Purging configuration files for ceph-base (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:13:06.376 INFO:teuthology.orchestra.run.vm06.stdout:Purging configuration files for ceph-base (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:13:06.816 INFO:teuthology.orchestra.run.vm06.stdout:Purging configuration files for ceph-common (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:13:06.844 INFO:teuthology.orchestra.run.vm11.stdout:Purging configuration files for ceph-common (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:13:07.265 INFO:teuthology.orchestra.run.vm06.stdout:Purging configuration files for ceph-osd (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:13:07.296 INFO:teuthology.orchestra.run.vm11.stdout:Purging configuration files for ceph-osd (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:13:08.740 INFO:teuthology.orchestra.run.vm11.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-08T23:13:08.783 INFO:teuthology.orchestra.run.vm11.stdout:Reading package lists...
2026-03-08T23:13:08.924 INFO:teuthology.orchestra.run.vm06.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-08T23:13:08.962 INFO:teuthology.orchestra.run.vm06.stdout:Reading package lists...
2026-03-08T23:13:08.963 INFO:teuthology.orchestra.run.vm11.stdout:Building dependency tree...
2026-03-08T23:13:08.963 INFO:teuthology.orchestra.run.vm11.stdout:Reading state information...
2026-03-08T23:13:09.172 INFO:teuthology.orchestra.run.vm11.stdout:The following packages were automatically installed and are no longer required:
2026-03-08T23:13:09.172 INFO:teuthology.orchestra.run.vm11.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-08T23:13:09.172 INFO:teuthology.orchestra.run.vm11.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1
2026-03-08T23:13:09.173 INFO:teuthology.orchestra.run.vm11.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc
2026-03-08T23:13:09.173 INFO:teuthology.orchestra.run.vm11.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-08T23:13:09.173 INFO:teuthology.orchestra.run.vm11.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth
2026-03-08T23:13:09.173 INFO:teuthology.orchestra.run.vm11.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools
2026-03-08T23:13:09.173 INFO:teuthology.orchestra.run.vm11.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils
2026-03-08T23:13:09.173 INFO:teuthology.orchestra.run.vm11.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy
2026-03-08T23:13:09.173 INFO:teuthology.orchestra.run.vm11.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable
2026-03-08T23:13:09.173 INFO:teuthology.orchestra.run.vm11.stdout: python3-psutil python3-pyinotify python3-repoze.lru
2026-03-08T23:13:09.173 INFO:teuthology.orchestra.run.vm11.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric
2026-03-08T23:13:09.173 INFO:teuthology.orchestra.run.vm11.stdout: python3-simplejson python3-singledispatch python3-sklearn
2026-03-08T23:13:09.173 INFO:teuthology.orchestra.run.vm11.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl
2026-03-08T23:13:09.173 INFO:teuthology.orchestra.run.vm11.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket
2026-03-08T23:13:09.173 INFO:teuthology.orchestra.run.vm11.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils
2026-03-08T23:13:09.173 INFO:teuthology.orchestra.run.vm11.stdout: sg3-utils-udev smartmontools socat xmlstarlet
2026-03-08T23:13:09.173 INFO:teuthology.orchestra.run.vm11.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-08T23:13:09.189 INFO:teuthology.orchestra.run.vm11.stdout:The following packages will be REMOVED:
2026-03-08T23:13:09.190 INFO:teuthology.orchestra.run.vm11.stdout: ceph-fuse*
2026-03-08T23:13:09.197 INFO:teuthology.orchestra.run.vm06.stdout:Building dependency tree...
2026-03-08T23:13:09.197 INFO:teuthology.orchestra.run.vm06.stdout:Reading state information...
2026-03-08T23:13:09.393 INFO:teuthology.orchestra.run.vm06.stdout:The following packages were automatically installed and are no longer required:
2026-03-08T23:13:09.393 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-08T23:13:09.394 INFO:teuthology.orchestra.run.vm06.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1
2026-03-08T23:13:09.394 INFO:teuthology.orchestra.run.vm06.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc
2026-03-08T23:13:09.394 INFO:teuthology.orchestra.run.vm06.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-08T23:13:09.394 INFO:teuthology.orchestra.run.vm06.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth
2026-03-08T23:13:09.394 INFO:teuthology.orchestra.run.vm06.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools
2026-03-08T23:13:09.394 INFO:teuthology.orchestra.run.vm06.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils
2026-03-08T23:13:09.394 INFO:teuthology.orchestra.run.vm06.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy
2026-03-08T23:13:09.394 INFO:teuthology.orchestra.run.vm06.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable
2026-03-08T23:13:09.394 INFO:teuthology.orchestra.run.vm06.stdout: python3-psutil python3-pyinotify python3-repoze.lru
2026-03-08T23:13:09.394 INFO:teuthology.orchestra.run.vm06.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric
2026-03-08T23:13:09.394 INFO:teuthology.orchestra.run.vm06.stdout: python3-simplejson python3-singledispatch python3-sklearn
2026-03-08T23:13:09.394 INFO:teuthology.orchestra.run.vm06.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl
2026-03-08T23:13:09.394 INFO:teuthology.orchestra.run.vm06.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket
2026-03-08T23:13:09.394 INFO:teuthology.orchestra.run.vm06.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils
2026-03-08T23:13:09.394 INFO:teuthology.orchestra.run.vm06.stdout: sg3-utils-udev smartmontools socat xmlstarlet
2026-03-08T23:13:09.394 INFO:teuthology.orchestra.run.vm06.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-08T23:13:09.395 INFO:teuthology.orchestra.run.vm11.stdout:0 upgraded, 0 newly installed, 1 to remove and 10 not upgraded.
2026-03-08T23:13:09.395 INFO:teuthology.orchestra.run.vm11.stdout:After this operation, 3673 kB disk space will be freed.
2026-03-08T23:13:09.408 INFO:teuthology.orchestra.run.vm06.stdout:The following packages will be REMOVED:
2026-03-08T23:13:09.409 INFO:teuthology.orchestra.run.vm06.stdout: ceph-fuse*
2026-03-08T23:13:09.442 INFO:teuthology.orchestra.run.vm11.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117443 files and directories currently installed.)
2026-03-08T23:13:09.444 INFO:teuthology.orchestra.run.vm11.stdout:Removing ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:13:09.724 INFO:teuthology.orchestra.run.vm06.stdout:0 upgraded, 0 newly installed, 1 to remove and 10 not upgraded.
2026-03-08T23:13:09.724 INFO:teuthology.orchestra.run.vm06.stdout:After this operation, 3673 kB disk space will be freed.
2026-03-08T23:13:09.764 INFO:teuthology.orchestra.run.vm06.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117443 files and directories currently installed.)
2026-03-08T23:13:09.766 INFO:teuthology.orchestra.run.vm06.stdout:Removing ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:13:09.895 INFO:teuthology.orchestra.run.vm11.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-08T23:13:09.993 INFO:teuthology.orchestra.run.vm11.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117434 files and directories currently installed.)
2026-03-08T23:13:09.995 INFO:teuthology.orchestra.run.vm11.stdout:Purging configuration files for ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:13:10.192 INFO:teuthology.orchestra.run.vm06.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-08T23:13:10.294 INFO:teuthology.orchestra.run.vm06.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117434 files and directories currently installed.)
2026-03-08T23:13:10.297 INFO:teuthology.orchestra.run.vm06.stdout:Purging configuration files for ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:13:11.538 INFO:teuthology.orchestra.run.vm11.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-08T23:13:11.571 INFO:teuthology.orchestra.run.vm11.stdout:Reading package lists...
2026-03-08T23:13:11.775 INFO:teuthology.orchestra.run.vm11.stdout:Building dependency tree...
2026-03-08T23:13:11.776 INFO:teuthology.orchestra.run.vm06.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-08T23:13:11.776 INFO:teuthology.orchestra.run.vm11.stdout:Reading state information...
2026-03-08T23:13:11.815 INFO:teuthology.orchestra.run.vm06.stdout:Reading package lists...
2026-03-08T23:13:11.918 INFO:teuthology.orchestra.run.vm11.stdout:Package 'ceph-test' is not installed, so not removed
2026-03-08T23:13:11.918 INFO:teuthology.orchestra.run.vm11.stdout:The following packages were automatically installed and are no longer required:
2026-03-08T23:13:11.918 INFO:teuthology.orchestra.run.vm11.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-08T23:13:11.918 INFO:teuthology.orchestra.run.vm11.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1
2026-03-08T23:13:11.918 INFO:teuthology.orchestra.run.vm11.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc
2026-03-08T23:13:11.918 INFO:teuthology.orchestra.run.vm11.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-08T23:13:11.918 INFO:teuthology.orchestra.run.vm11.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth
2026-03-08T23:13:11.918 INFO:teuthology.orchestra.run.vm11.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools
2026-03-08T23:13:11.918 INFO:teuthology.orchestra.run.vm11.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils
2026-03-08T23:13:11.918 INFO:teuthology.orchestra.run.vm11.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy
2026-03-08T23:13:11.918 INFO:teuthology.orchestra.run.vm11.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable
2026-03-08T23:13:11.918 INFO:teuthology.orchestra.run.vm11.stdout: python3-psutil python3-pyinotify python3-repoze.lru
2026-03-08T23:13:11.918 INFO:teuthology.orchestra.run.vm11.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric
2026-03-08T23:13:11.918 INFO:teuthology.orchestra.run.vm11.stdout: python3-simplejson python3-singledispatch python3-sklearn
2026-03-08T23:13:11.918 INFO:teuthology.orchestra.run.vm11.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl
2026-03-08T23:13:11.918 INFO:teuthology.orchestra.run.vm11.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket
2026-03-08T23:13:11.918 INFO:teuthology.orchestra.run.vm11.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils
2026-03-08T23:13:11.918 INFO:teuthology.orchestra.run.vm11.stdout: sg3-utils-udev smartmontools socat xmlstarlet
2026-03-08T23:13:11.918 INFO:teuthology.orchestra.run.vm11.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-08T23:13:11.934 INFO:teuthology.orchestra.run.vm11.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded.
2026-03-08T23:13:11.934 INFO:teuthology.orchestra.run.vm11.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-08T23:13:11.936 INFO:teuthology.orchestra.run.vm06.stdout:Building dependency tree...
2026-03-08T23:13:11.937 INFO:teuthology.orchestra.run.vm06.stdout:Reading state information...
2026-03-08T23:13:11.969 INFO:teuthology.orchestra.run.vm11.stdout:Reading package lists...
2026-03-08T23:13:12.111 INFO:teuthology.orchestra.run.vm06.stdout:Package 'ceph-test' is not installed, so not removed
2026-03-08T23:13:12.111 INFO:teuthology.orchestra.run.vm06.stdout:The following packages were automatically installed and are no longer required:
2026-03-08T23:13:12.111 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-08T23:13:12.111 INFO:teuthology.orchestra.run.vm06.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1
2026-03-08T23:13:12.111 INFO:teuthology.orchestra.run.vm06.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc
2026-03-08T23:13:12.111 INFO:teuthology.orchestra.run.vm06.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-08T23:13:12.111 INFO:teuthology.orchestra.run.vm06.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth
2026-03-08T23:13:12.111 INFO:teuthology.orchestra.run.vm06.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools
2026-03-08T23:13:12.111 INFO:teuthology.orchestra.run.vm06.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils
2026-03-08T23:13:12.111 INFO:teuthology.orchestra.run.vm06.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy
2026-03-08T23:13:12.111 INFO:teuthology.orchestra.run.vm06.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable
2026-03-08T23:13:12.111 INFO:teuthology.orchestra.run.vm06.stdout: python3-psutil python3-pyinotify python3-repoze.lru
2026-03-08T23:13:12.111 INFO:teuthology.orchestra.run.vm06.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric
2026-03-08T23:13:12.111 INFO:teuthology.orchestra.run.vm06.stdout: python3-simplejson python3-singledispatch python3-sklearn
2026-03-08T23:13:12.112 INFO:teuthology.orchestra.run.vm06.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl
2026-03-08T23:13:12.112 INFO:teuthology.orchestra.run.vm06.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket
2026-03-08T23:13:12.112 INFO:teuthology.orchestra.run.vm06.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils
2026-03-08T23:13:12.112 INFO:teuthology.orchestra.run.vm06.stdout: sg3-utils-udev smartmontools socat xmlstarlet
2026-03-08T23:13:12.112 INFO:teuthology.orchestra.run.vm06.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-08T23:13:12.137 INFO:teuthology.orchestra.run.vm06.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded.
2026-03-08T23:13:12.137 INFO:teuthology.orchestra.run.vm06.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-08T23:13:12.171 INFO:teuthology.orchestra.run.vm06.stdout:Reading package lists...
2026-03-08T23:13:12.183 INFO:teuthology.orchestra.run.vm11.stdout:Building dependency tree...
2026-03-08T23:13:12.184 INFO:teuthology.orchestra.run.vm11.stdout:Reading state information...
2026-03-08T23:13:12.314 INFO:teuthology.orchestra.run.vm11.stdout:Package 'ceph-volume' is not installed, so not removed
2026-03-08T23:13:12.314 INFO:teuthology.orchestra.run.vm11.stdout:The following packages were automatically installed and are no longer required:
2026-03-08T23:13:12.314 INFO:teuthology.orchestra.run.vm11.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-08T23:13:12.314 INFO:teuthology.orchestra.run.vm11.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1
2026-03-08T23:13:12.314 INFO:teuthology.orchestra.run.vm11.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc
2026-03-08T23:13:12.314 INFO:teuthology.orchestra.run.vm11.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-08T23:13:12.314 INFO:teuthology.orchestra.run.vm11.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth
2026-03-08T23:13:12.314 INFO:teuthology.orchestra.run.vm11.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools
2026-03-08T23:13:12.315 INFO:teuthology.orchestra.run.vm11.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils
2026-03-08T23:13:12.315 INFO:teuthology.orchestra.run.vm11.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy
2026-03-08T23:13:12.315 INFO:teuthology.orchestra.run.vm11.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable
2026-03-08T23:13:12.315 INFO:teuthology.orchestra.run.vm11.stdout: python3-psutil python3-pyinotify python3-repoze.lru
2026-03-08T23:13:12.315 INFO:teuthology.orchestra.run.vm11.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric
2026-03-08T23:13:12.315 INFO:teuthology.orchestra.run.vm11.stdout: python3-simplejson python3-singledispatch python3-sklearn
2026-03-08T23:13:12.315 INFO:teuthology.orchestra.run.vm11.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl
2026-03-08T23:13:12.315 INFO:teuthology.orchestra.run.vm11.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket
2026-03-08T23:13:12.315 INFO:teuthology.orchestra.run.vm11.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils
2026-03-08T23:13:12.315 INFO:teuthology.orchestra.run.vm11.stdout: sg3-utils-udev smartmontools socat xmlstarlet
2026-03-08T23:13:12.315 INFO:teuthology.orchestra.run.vm11.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-08T23:13:12.328 INFO:teuthology.orchestra.run.vm11.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded.
2026-03-08T23:13:12.329 INFO:teuthology.orchestra.run.vm11.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-08T23:13:12.361 INFO:teuthology.orchestra.run.vm11.stdout:Reading package lists...
2026-03-08T23:13:12.371 INFO:teuthology.orchestra.run.vm06.stdout:Building dependency tree...
2026-03-08T23:13:12.371 INFO:teuthology.orchestra.run.vm06.stdout:Reading state information...
2026-03-08T23:13:12.539 INFO:teuthology.orchestra.run.vm06.stdout:Package 'ceph-volume' is not installed, so not removed
2026-03-08T23:13:12.539 INFO:teuthology.orchestra.run.vm06.stdout:The following packages were automatically installed and are no longer required:
2026-03-08T23:13:12.540 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-08T23:13:12.540 INFO:teuthology.orchestra.run.vm06.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1
2026-03-08T23:13:12.540 INFO:teuthology.orchestra.run.vm06.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc
2026-03-08T23:13:12.540 INFO:teuthology.orchestra.run.vm06.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-08T23:13:12.540 INFO:teuthology.orchestra.run.vm06.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth
2026-03-08T23:13:12.540 INFO:teuthology.orchestra.run.vm06.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools
2026-03-08T23:13:12.540 INFO:teuthology.orchestra.run.vm06.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils
2026-03-08T23:13:12.540 INFO:teuthology.orchestra.run.vm06.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy
2026-03-08T23:13:12.540 INFO:teuthology.orchestra.run.vm06.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable
2026-03-08T23:13:12.540 INFO:teuthology.orchestra.run.vm06.stdout: python3-psutil python3-pyinotify python3-repoze.lru
2026-03-08T23:13:12.540 INFO:teuthology.orchestra.run.vm06.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric
2026-03-08T23:13:12.540 INFO:teuthology.orchestra.run.vm06.stdout: python3-simplejson python3-singledispatch python3-sklearn
2026-03-08T23:13:12.540 INFO:teuthology.orchestra.run.vm06.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl
2026-03-08T23:13:12.540 INFO:teuthology.orchestra.run.vm06.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket
2026-03-08T23:13:12.540 INFO:teuthology.orchestra.run.vm06.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils
2026-03-08T23:13:12.540 INFO:teuthology.orchestra.run.vm06.stdout: sg3-utils-udev smartmontools socat xmlstarlet
2026-03-08T23:13:12.540 INFO:teuthology.orchestra.run.vm06.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-08T23:13:12.559 INFO:teuthology.orchestra.run.vm06.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded.
2026-03-08T23:13:12.559 INFO:teuthology.orchestra.run.vm06.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-08T23:13:12.579 INFO:teuthology.orchestra.run.vm11.stdout:Building dependency tree...
2026-03-08T23:13:12.580 INFO:teuthology.orchestra.run.vm11.stdout:Reading state information...
2026-03-08T23:13:12.596 INFO:teuthology.orchestra.run.vm06.stdout:Reading package lists...
2026-03-08T23:13:12.697 INFO:teuthology.orchestra.run.vm11.stdout:Package 'radosgw' is not installed, so not removed
2026-03-08T23:13:12.697 INFO:teuthology.orchestra.run.vm11.stdout:The following packages were automatically installed and are no longer required:
2026-03-08T23:13:12.697 INFO:teuthology.orchestra.run.vm11.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-08T23:13:12.697 INFO:teuthology.orchestra.run.vm11.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1
2026-03-08T23:13:12.697 INFO:teuthology.orchestra.run.vm11.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc
2026-03-08T23:13:12.697 INFO:teuthology.orchestra.run.vm11.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools
2026-03-08T23:13:12.697 INFO:teuthology.orchestra.run.vm11.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth
2026-03-08T23:13:12.697 INFO:teuthology.orchestra.run.vm11.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools
2026-03-08T23:13:12.697 INFO:teuthology.orchestra.run.vm11.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils
2026-03-08T23:13:12.697 INFO:teuthology.orchestra.run.vm11.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy
2026-03-08T23:13:12.697 INFO:teuthology.orchestra.run.vm11.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable
2026-03-08T23:13:12.697 INFO:teuthology.orchestra.run.vm11.stdout: python3-psutil python3-pyinotify python3-repoze.lru
2026-03-08T23:13:12.697 INFO:teuthology.orchestra.run.vm11.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric
2026-03-08T23:13:12.697 INFO:teuthology.orchestra.run.vm11.stdout: python3-simplejson python3-singledispatch python3-sklearn
2026-03-08T23:13:12.697 INFO:teuthology.orchestra.run.vm11.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl
2026-03-08T23:13:12.697 INFO:teuthology.orchestra.run.vm11.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket
2026-03-08T23:13:12.698 INFO:teuthology.orchestra.run.vm11.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils
2026-03-08T23:13:12.698 INFO:teuthology.orchestra.run.vm11.stdout: sg3-utils-udev smartmontools socat xmlstarlet
2026-03-08T23:13:12.698 INFO:teuthology.orchestra.run.vm11.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-08T23:13:12.717 INFO:teuthology.orchestra.run.vm11.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded.
2026-03-08T23:13:12.717 INFO:teuthology.orchestra.run.vm11.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-08T23:13:12.750 INFO:teuthology.orchestra.run.vm11.stdout:Reading package lists...
2026-03-08T23:13:12.804 INFO:teuthology.orchestra.run.vm06.stdout:Building dependency tree...
2026-03-08T23:13:12.805 INFO:teuthology.orchestra.run.vm06.stdout:Reading state information...
2026-03-08T23:13:12.866 INFO:teuthology.orchestra.run.vm11.stdout:Building dependency tree...
2026-03-08T23:13:12.866 INFO:teuthology.orchestra.run.vm11.stdout:Reading state information...
2026-03-08T23:13:12.965 INFO:teuthology.orchestra.run.vm11.stdout:The following packages were automatically installed and are no longer required: 2026-03-08T23:13:12.965 INFO:teuthology.orchestra.run.vm11.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-08T23:13:12.965 INFO:teuthology.orchestra.run.vm11.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 2026-03-08T23:13:12.965 INFO:teuthology.orchestra.run.vm11.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2 2026-03-08T23:13:12.965 INFO:teuthology.orchestra.run.vm11.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli 2026-03-08T23:13:12.965 INFO:teuthology.orchestra.run.vm11.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-08T23:13:12.965 INFO:teuthology.orchestra.run.vm11.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-08T23:13:12.965 INFO:teuthology.orchestra.run.vm11.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-08T23:13:12.965 INFO:teuthology.orchestra.run.vm11.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-08T23:13:12.965 INFO:teuthology.orchestra.run.vm11.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-08T23:13:12.965 INFO:teuthology.orchestra.run.vm11.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-08T23:13:12.965 INFO:teuthology.orchestra.run.vm11.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-08T23:13:12.965 INFO:teuthology.orchestra.run.vm11.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-08T23:13:12.965 INFO:teuthology.orchestra.run.vm11.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-08T23:13:12.965 INFO:teuthology.orchestra.run.vm11.stdout: python3-singledispatch 
python3-sklearn python3-sklearn-lib python3-tempita 2026-03-08T23:13:12.965 INFO:teuthology.orchestra.run.vm11.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-08T23:13:12.965 INFO:teuthology.orchestra.run.vm11.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-08T23:13:12.965 INFO:teuthology.orchestra.run.vm11.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip 2026-03-08T23:13:12.965 INFO:teuthology.orchestra.run.vm11.stdout: xmlstarlet zip 2026-03-08T23:13:12.966 INFO:teuthology.orchestra.run.vm11.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-08T23:13:12.973 INFO:teuthology.orchestra.run.vm11.stdout:The following packages will be REMOVED: 2026-03-08T23:13:12.973 INFO:teuthology.orchestra.run.vm11.stdout: python3-cephfs* python3-rados* python3-rgw* 2026-03-08T23:13:12.995 INFO:teuthology.orchestra.run.vm06.stdout:Package 'radosgw' is not installed, so not removed 2026-03-08T23:13:12.995 INFO:teuthology.orchestra.run.vm06.stdout:The following packages were automatically installed and are no longer required: 2026-03-08T23:13:12.995 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-08T23:13:12.995 INFO:teuthology.orchestra.run.vm06.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1 2026-03-08T23:13:12.996 INFO:teuthology.orchestra.run.vm06.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc 2026-03-08T23:13:12.996 INFO:teuthology.orchestra.run.vm06.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools 2026-03-08T23:13:12.996 INFO:teuthology.orchestra.run.vm06.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth 2026-03-08T23:13:12.996 INFO:teuthology.orchestra.run.vm06.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools 2026-03-08T23:13:12.996 
INFO:teuthology.orchestra.run.vm06.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils 2026-03-08T23:13:12.996 INFO:teuthology.orchestra.run.vm06.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy 2026-03-08T23:13:12.996 INFO:teuthology.orchestra.run.vm06.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable 2026-03-08T23:13:12.996 INFO:teuthology.orchestra.run.vm06.stdout: python3-psutil python3-pyinotify python3-repoze.lru 2026-03-08T23:13:12.996 INFO:teuthology.orchestra.run.vm06.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric 2026-03-08T23:13:12.996 INFO:teuthology.orchestra.run.vm06.stdout: python3-simplejson python3-singledispatch python3-sklearn 2026-03-08T23:13:12.996 INFO:teuthology.orchestra.run.vm06.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl 2026-03-08T23:13:12.996 INFO:teuthology.orchestra.run.vm06.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket 2026-03-08T23:13:12.996 INFO:teuthology.orchestra.run.vm06.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils 2026-03-08T23:13:12.996 INFO:teuthology.orchestra.run.vm06.stdout: sg3-utils-udev smartmontools socat xmlstarlet 2026-03-08T23:13:12.996 INFO:teuthology.orchestra.run.vm06.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-08T23:13:13.017 INFO:teuthology.orchestra.run.vm06.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-08T23:13:13.017 INFO:teuthology.orchestra.run.vm06.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-08T23:13:13.051 INFO:teuthology.orchestra.run.vm06.stdout:Reading package lists... 2026-03-08T23:13:13.132 INFO:teuthology.orchestra.run.vm11.stdout:0 upgraded, 0 newly installed, 3 to remove and 10 not upgraded. 
2026-03-08T23:13:13.132 INFO:teuthology.orchestra.run.vm11.stdout:After this operation, 2062 kB disk space will be freed. 2026-03-08T23:13:13.169 INFO:teuthology.orchestra.run.vm11.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117434 files and directories currently installed.) 2026-03-08T23:13:13.171 INFO:teuthology.orchestra.run.vm11.stdout:Removing python3-cephfs (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T23:13:13.183 INFO:teuthology.orchestra.run.vm11.stdout:Removing python3-rgw (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T23:13:13.193 INFO:teuthology.orchestra.run.vm11.stdout:Removing python3-rados (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T23:13:13.272 INFO:teuthology.orchestra.run.vm06.stdout:Building dependency tree... 2026-03-08T23:13:13.273 INFO:teuthology.orchestra.run.vm06.stdout:Reading state information... 
2026-03-08T23:13:13.506 INFO:teuthology.orchestra.run.vm06.stdout:The following packages were automatically installed and are no longer required: 2026-03-08T23:13:13.506 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-08T23:13:13.506 INFO:teuthology.orchestra.run.vm06.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 2026-03-08T23:13:13.506 INFO:teuthology.orchestra.run.vm06.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2 2026-03-08T23:13:13.507 INFO:teuthology.orchestra.run.vm06.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli 2026-03-08T23:13:13.507 INFO:teuthology.orchestra.run.vm06.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-08T23:13:13.507 INFO:teuthology.orchestra.run.vm06.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-08T23:13:13.507 INFO:teuthology.orchestra.run.vm06.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-08T23:13:13.507 INFO:teuthology.orchestra.run.vm06.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-08T23:13:13.507 INFO:teuthology.orchestra.run.vm06.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-08T23:13:13.507 INFO:teuthology.orchestra.run.vm06.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-08T23:13:13.507 INFO:teuthology.orchestra.run.vm06.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-08T23:13:13.507 INFO:teuthology.orchestra.run.vm06.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-08T23:13:13.507 INFO:teuthology.orchestra.run.vm06.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-08T23:13:13.507 INFO:teuthology.orchestra.run.vm06.stdout: python3-singledispatch 
python3-sklearn python3-sklearn-lib python3-tempita 2026-03-08T23:13:13.507 INFO:teuthology.orchestra.run.vm06.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-08T23:13:13.507 INFO:teuthology.orchestra.run.vm06.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-08T23:13:13.507 INFO:teuthology.orchestra.run.vm06.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip 2026-03-08T23:13:13.507 INFO:teuthology.orchestra.run.vm06.stdout: xmlstarlet zip 2026-03-08T23:13:13.507 INFO:teuthology.orchestra.run.vm06.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-08T23:13:13.523 INFO:teuthology.orchestra.run.vm06.stdout:The following packages will be REMOVED: 2026-03-08T23:13:13.523 INFO:teuthology.orchestra.run.vm06.stdout: python3-cephfs* python3-rados* python3-rgw* 2026-03-08T23:13:13.707 INFO:teuthology.orchestra.run.vm06.stdout:0 upgraded, 0 newly installed, 3 to remove and 10 not upgraded. 2026-03-08T23:13:13.707 INFO:teuthology.orchestra.run.vm06.stdout:After this operation, 2062 kB disk space will be freed. 2026-03-08T23:13:13.743 INFO:teuthology.orchestra.run.vm06.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117434 files and directories currently installed.) 2026-03-08T23:13:13.745 INFO:teuthology.orchestra.run.vm06.stdout:Removing python3-cephfs (19.2.3-678-ge911bdeb-1jammy) ... 
2026-03-08T23:13:13.756 INFO:teuthology.orchestra.run.vm06.stdout:Removing python3-rgw (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T23:13:13.769 INFO:teuthology.orchestra.run.vm06.stdout:Removing python3-rados (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T23:13:14.418 INFO:teuthology.orchestra.run.vm11.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-08T23:13:14.450 INFO:teuthology.orchestra.run.vm11.stdout:Reading package lists... 2026-03-08T23:13:14.672 INFO:teuthology.orchestra.run.vm11.stdout:Building dependency tree... 2026-03-08T23:13:14.672 INFO:teuthology.orchestra.run.vm11.stdout:Reading state information... 2026-03-08T23:13:14.845 INFO:teuthology.orchestra.run.vm11.stdout:Package 'python3-rgw' is not installed, so not removed 2026-03-08T23:13:14.845 INFO:teuthology.orchestra.run.vm11.stdout:The following packages were automatically installed and are no longer required: 2026-03-08T23:13:14.846 INFO:teuthology.orchestra.run.vm11.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-08T23:13:14.846 INFO:teuthology.orchestra.run.vm11.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 2026-03-08T23:13:14.846 INFO:teuthology.orchestra.run.vm11.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2 2026-03-08T23:13:14.847 INFO:teuthology.orchestra.run.vm11.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli 2026-03-08T23:13:14.847 INFO:teuthology.orchestra.run.vm11.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-08T23:13:14.847 INFO:teuthology.orchestra.run.vm11.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-08T23:13:14.847 INFO:teuthology.orchestra.run.vm11.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-08T23:13:14.847 INFO:teuthology.orchestra.run.vm11.stdout: python3-jaraco.collections 
python3-jaraco.functools python3-jaraco.text 2026-03-08T23:13:14.847 INFO:teuthology.orchestra.run.vm11.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-08T23:13:14.847 INFO:teuthology.orchestra.run.vm11.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-08T23:13:14.847 INFO:teuthology.orchestra.run.vm11.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-08T23:13:14.847 INFO:teuthology.orchestra.run.vm11.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-08T23:13:14.847 INFO:teuthology.orchestra.run.vm11.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-08T23:13:14.847 INFO:teuthology.orchestra.run.vm11.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-08T23:13:14.847 INFO:teuthology.orchestra.run.vm11.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-08T23:13:14.847 INFO:teuthology.orchestra.run.vm11.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-08T23:13:14.847 INFO:teuthology.orchestra.run.vm11.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip 2026-03-08T23:13:14.847 INFO:teuthology.orchestra.run.vm11.stdout: xmlstarlet zip 2026-03-08T23:13:14.847 INFO:teuthology.orchestra.run.vm11.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-08T23:13:14.869 INFO:teuthology.orchestra.run.vm11.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-08T23:13:14.869 INFO:teuthology.orchestra.run.vm11.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-08T23:13:14.901 INFO:teuthology.orchestra.run.vm11.stdout:Reading package lists... 2026-03-08T23:13:14.993 INFO:teuthology.orchestra.run.vm06.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 
2026-03-08T23:13:15.029 INFO:teuthology.orchestra.run.vm06.stdout:Reading package lists... 2026-03-08T23:13:15.104 INFO:teuthology.orchestra.run.vm11.stdout:Building dependency tree... 2026-03-08T23:13:15.105 INFO:teuthology.orchestra.run.vm11.stdout:Reading state information... 2026-03-08T23:13:15.256 INFO:teuthology.orchestra.run.vm06.stdout:Building dependency tree... 2026-03-08T23:13:15.256 INFO:teuthology.orchestra.run.vm06.stdout:Reading state information... 2026-03-08T23:13:15.324 INFO:teuthology.orchestra.run.vm11.stdout:Package 'python3-cephfs' is not installed, so not removed 2026-03-08T23:13:15.324 INFO:teuthology.orchestra.run.vm11.stdout:The following packages were automatically installed and are no longer required: 2026-03-08T23:13:15.324 INFO:teuthology.orchestra.run.vm11.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-08T23:13:15.324 INFO:teuthology.orchestra.run.vm11.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 2026-03-08T23:13:15.324 INFO:teuthology.orchestra.run.vm11.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2 2026-03-08T23:13:15.325 INFO:teuthology.orchestra.run.vm11.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli 2026-03-08T23:13:15.325 INFO:teuthology.orchestra.run.vm11.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-08T23:13:15.325 INFO:teuthology.orchestra.run.vm11.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-08T23:13:15.325 INFO:teuthology.orchestra.run.vm11.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-08T23:13:15.325 INFO:teuthology.orchestra.run.vm11.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-08T23:13:15.325 INFO:teuthology.orchestra.run.vm11.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-08T23:13:15.325 
INFO:teuthology.orchestra.run.vm11.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-08T23:13:15.325 INFO:teuthology.orchestra.run.vm11.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-08T23:13:15.325 INFO:teuthology.orchestra.run.vm11.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-08T23:13:15.325 INFO:teuthology.orchestra.run.vm11.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-08T23:13:15.325 INFO:teuthology.orchestra.run.vm11.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-08T23:13:15.325 INFO:teuthology.orchestra.run.vm11.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-08T23:13:15.325 INFO:teuthology.orchestra.run.vm11.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-08T23:13:15.326 INFO:teuthology.orchestra.run.vm11.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip 2026-03-08T23:13:15.326 INFO:teuthology.orchestra.run.vm11.stdout: xmlstarlet zip 2026-03-08T23:13:15.326 INFO:teuthology.orchestra.run.vm11.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-08T23:13:15.351 INFO:teuthology.orchestra.run.vm11.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-08T23:13:15.351 INFO:teuthology.orchestra.run.vm11.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-08T23:13:15.384 INFO:teuthology.orchestra.run.vm11.stdout:Reading package lists... 
2026-03-08T23:13:15.464 INFO:teuthology.orchestra.run.vm06.stdout:Package 'python3-rgw' is not installed, so not removed 2026-03-08T23:13:15.464 INFO:teuthology.orchestra.run.vm06.stdout:The following packages were automatically installed and are no longer required: 2026-03-08T23:13:15.464 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-08T23:13:15.464 INFO:teuthology.orchestra.run.vm06.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 2026-03-08T23:13:15.464 INFO:teuthology.orchestra.run.vm06.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2 2026-03-08T23:13:15.465 INFO:teuthology.orchestra.run.vm06.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli 2026-03-08T23:13:15.465 INFO:teuthology.orchestra.run.vm06.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-08T23:13:15.465 INFO:teuthology.orchestra.run.vm06.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-08T23:13:15.465 INFO:teuthology.orchestra.run.vm06.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-08T23:13:15.465 INFO:teuthology.orchestra.run.vm06.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-08T23:13:15.465 INFO:teuthology.orchestra.run.vm06.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-08T23:13:15.465 INFO:teuthology.orchestra.run.vm06.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-08T23:13:15.465 INFO:teuthology.orchestra.run.vm06.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-08T23:13:15.465 INFO:teuthology.orchestra.run.vm06.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-08T23:13:15.465 INFO:teuthology.orchestra.run.vm06.stdout: python3-routes python3-rsa python3-simplegeneric 
python3-simplejson 2026-03-08T23:13:15.465 INFO:teuthology.orchestra.run.vm06.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-08T23:13:15.465 INFO:teuthology.orchestra.run.vm06.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-08T23:13:15.465 INFO:teuthology.orchestra.run.vm06.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-08T23:13:15.465 INFO:teuthology.orchestra.run.vm06.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip 2026-03-08T23:13:15.465 INFO:teuthology.orchestra.run.vm06.stdout: xmlstarlet zip 2026-03-08T23:13:15.465 INFO:teuthology.orchestra.run.vm06.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-08T23:13:15.482 INFO:teuthology.orchestra.run.vm06.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-08T23:13:15.482 INFO:teuthology.orchestra.run.vm06.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-08T23:13:15.516 INFO:teuthology.orchestra.run.vm06.stdout:Reading package lists... 2026-03-08T23:13:15.541 INFO:teuthology.orchestra.run.vm11.stdout:Building dependency tree... 2026-03-08T23:13:15.541 INFO:teuthology.orchestra.run.vm11.stdout:Reading state information... 2026-03-08T23:13:15.670 INFO:teuthology.orchestra.run.vm06.stdout:Building dependency tree... 2026-03-08T23:13:15.670 INFO:teuthology.orchestra.run.vm06.stdout:Reading state information... 
2026-03-08T23:13:15.773 INFO:teuthology.orchestra.run.vm11.stdout:The following packages were automatically installed and are no longer required: 2026-03-08T23:13:15.773 INFO:teuthology.orchestra.run.vm11.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-08T23:13:15.773 INFO:teuthology.orchestra.run.vm11.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 2026-03-08T23:13:15.774 INFO:teuthology.orchestra.run.vm11.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2 2026-03-08T23:13:15.774 INFO:teuthology.orchestra.run.vm11.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli 2026-03-08T23:13:15.775 INFO:teuthology.orchestra.run.vm11.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-08T23:13:15.775 INFO:teuthology.orchestra.run.vm11.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-08T23:13:15.775 INFO:teuthology.orchestra.run.vm11.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-08T23:13:15.775 INFO:teuthology.orchestra.run.vm11.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-08T23:13:15.775 INFO:teuthology.orchestra.run.vm11.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-08T23:13:15.775 INFO:teuthology.orchestra.run.vm11.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-08T23:13:15.775 INFO:teuthology.orchestra.run.vm11.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-08T23:13:15.775 INFO:teuthology.orchestra.run.vm11.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-08T23:13:15.775 INFO:teuthology.orchestra.run.vm11.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-08T23:13:15.775 INFO:teuthology.orchestra.run.vm11.stdout: python3-singledispatch 
python3-sklearn python3-sklearn-lib python3-tempita 2026-03-08T23:13:15.775 INFO:teuthology.orchestra.run.vm11.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-08T23:13:15.775 INFO:teuthology.orchestra.run.vm11.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-08T23:13:15.775 INFO:teuthology.orchestra.run.vm11.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip 2026-03-08T23:13:15.775 INFO:teuthology.orchestra.run.vm11.stdout: xmlstarlet zip 2026-03-08T23:13:15.775 INFO:teuthology.orchestra.run.vm11.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-08T23:13:15.797 INFO:teuthology.orchestra.run.vm11.stdout:The following packages will be REMOVED: 2026-03-08T23:13:15.797 INFO:teuthology.orchestra.run.vm11.stdout: python3-rbd* 2026-03-08T23:13:15.840 INFO:teuthology.orchestra.run.vm06.stdout:Package 'python3-cephfs' is not installed, so not removed 2026-03-08T23:13:15.840 INFO:teuthology.orchestra.run.vm06.stdout:The following packages were automatically installed and are no longer required: 2026-03-08T23:13:15.840 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-08T23:13:15.840 INFO:teuthology.orchestra.run.vm06.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 2026-03-08T23:13:15.840 INFO:teuthology.orchestra.run.vm06.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2 2026-03-08T23:13:15.840 INFO:teuthology.orchestra.run.vm06.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli 2026-03-08T23:13:15.840 INFO:teuthology.orchestra.run.vm06.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-08T23:13:15.840 INFO:teuthology.orchestra.run.vm06.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-08T23:13:15.840 INFO:teuthology.orchestra.run.vm06.stdout: 
python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-08T23:13:15.840 INFO:teuthology.orchestra.run.vm06.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-08T23:13:15.840 INFO:teuthology.orchestra.run.vm06.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-08T23:13:15.841 INFO:teuthology.orchestra.run.vm06.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-08T23:13:15.841 INFO:teuthology.orchestra.run.vm06.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-08T23:13:15.841 INFO:teuthology.orchestra.run.vm06.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-08T23:13:15.841 INFO:teuthology.orchestra.run.vm06.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-08T23:13:15.841 INFO:teuthology.orchestra.run.vm06.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-08T23:13:15.841 INFO:teuthology.orchestra.run.vm06.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-08T23:13:15.841 INFO:teuthology.orchestra.run.vm06.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-08T23:13:15.841 INFO:teuthology.orchestra.run.vm06.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip 2026-03-08T23:13:15.841 INFO:teuthology.orchestra.run.vm06.stdout: xmlstarlet zip 2026-03-08T23:13:15.841 INFO:teuthology.orchestra.run.vm06.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-08T23:13:15.854 INFO:teuthology.orchestra.run.vm06.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-08T23:13:15.855 INFO:teuthology.orchestra.run.vm06.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-08T23:13:15.888 INFO:teuthology.orchestra.run.vm06.stdout:Reading package lists... 
2026-03-08T23:13:15.991 INFO:teuthology.orchestra.run.vm11.stdout:0 upgraded, 0 newly installed, 1 to remove and 10 not upgraded. 2026-03-08T23:13:15.991 INFO:teuthology.orchestra.run.vm11.stdout:After this operation, 1186 kB disk space will be freed. 2026-03-08T23:13:16.027 INFO:teuthology.orchestra.run.vm11.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117410 files and directories currently installed.) 2026-03-08T23:13:16.029 INFO:teuthology.orchestra.run.vm11.stdout:Removing python3-rbd (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T23:13:16.088 INFO:teuthology.orchestra.run.vm06.stdout:Building dependency tree... 2026-03-08T23:13:16.089 INFO:teuthology.orchestra.run.vm06.stdout:Reading state information... 
2026-03-08T23:13:16.196 INFO:teuthology.orchestra.run.vm06.stdout:The following packages were automatically installed and are no longer required:
2026-03-08T23:13:16.196 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-08T23:13:16.197 INFO:teuthology.orchestra.run.vm06.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1
2026-03-08T23:13:16.197 INFO:teuthology.orchestra.run.vm06.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2
2026-03-08T23:13:16.197 INFO:teuthology.orchestra.run.vm06.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli
2026-03-08T23:13:16.197 INFO:teuthology.orchestra.run.vm06.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-08T23:13:16.197 INFO:teuthology.orchestra.run.vm06.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot
2026-03-08T23:13:16.197 INFO:teuthology.orchestra.run.vm06.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes
2026-03-08T23:13:16.197 INFO:teuthology.orchestra.run.vm06.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text
2026-03-08T23:13:16.197 INFO:teuthology.orchestra.run.vm06.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako
2026-03-08T23:13:16.197 INFO:teuthology.orchestra.run.vm06.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript
2026-03-08T23:13:16.197 INFO:teuthology.orchestra.run.vm06.stdout: python3-pecan python3-portend python3-prettytable python3-psutil
2026-03-08T23:13:16.197 INFO:teuthology.orchestra.run.vm06.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib
2026-03-08T23:13:16.197 INFO:teuthology.orchestra.run.vm06.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson
2026-03-08T23:13:16.197 INFO:teuthology.orchestra.run.vm06.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita
2026-03-08T23:13:16.197 INFO:teuthology.orchestra.run.vm06.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth
2026-03-08T23:13:16.197 INFO:teuthology.orchestra.run.vm06.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug
2026-03-08T23:13:16.197 INFO:teuthology.orchestra.run.vm06.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip
2026-03-08T23:13:16.197 INFO:teuthology.orchestra.run.vm06.stdout: xmlstarlet zip
2026-03-08T23:13:16.197 INFO:teuthology.orchestra.run.vm06.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-08T23:13:16.204 INFO:teuthology.orchestra.run.vm06.stdout:The following packages will be REMOVED:
2026-03-08T23:13:16.205 INFO:teuthology.orchestra.run.vm06.stdout: python3-rbd*
2026-03-08T23:13:16.382 INFO:teuthology.orchestra.run.vm06.stdout:0 upgraded, 0 newly installed, 1 to remove and 10 not upgraded.
2026-03-08T23:13:16.382 INFO:teuthology.orchestra.run.vm06.stdout:After this operation, 1186 kB disk space will be freed.
2026-03-08T23:13:16.419 INFO:teuthology.orchestra.run.vm06.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117410 files and directories currently installed.)
2026-03-08T23:13:16.421 INFO:teuthology.orchestra.run.vm06.stdout:Removing python3-rbd (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:13:17.194 INFO:teuthology.orchestra.run.vm11.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-08T23:13:17.230 INFO:teuthology.orchestra.run.vm11.stdout:Reading package lists...
2026-03-08T23:13:17.426 INFO:teuthology.orchestra.run.vm11.stdout:Building dependency tree...
2026-03-08T23:13:17.426 INFO:teuthology.orchestra.run.vm11.stdout:Reading state information...
2026-03-08T23:13:17.537 INFO:teuthology.orchestra.run.vm06.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-08T23:13:17.576 INFO:teuthology.orchestra.run.vm06.stdout:Reading package lists...
2026-03-08T23:13:17.644 INFO:teuthology.orchestra.run.vm11.stdout:The following packages were automatically installed and are no longer required:
2026-03-08T23:13:17.644 INFO:teuthology.orchestra.run.vm11.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-08T23:13:17.645 INFO:teuthology.orchestra.run.vm11.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1
2026-03-08T23:13:17.645 INFO:teuthology.orchestra.run.vm11.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2
2026-03-08T23:13:17.645 INFO:teuthology.orchestra.run.vm11.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli
2026-03-08T23:13:17.645 INFO:teuthology.orchestra.run.vm11.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-08T23:13:17.645 INFO:teuthology.orchestra.run.vm11.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot
2026-03-08T23:13:17.646 INFO:teuthology.orchestra.run.vm11.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes
2026-03-08T23:13:17.646 INFO:teuthology.orchestra.run.vm11.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text
2026-03-08T23:13:17.646 INFO:teuthology.orchestra.run.vm11.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako
2026-03-08T23:13:17.646 INFO:teuthology.orchestra.run.vm11.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript
2026-03-08T23:13:17.646 INFO:teuthology.orchestra.run.vm11.stdout: python3-pecan python3-portend python3-prettytable python3-psutil
2026-03-08T23:13:17.646 INFO:teuthology.orchestra.run.vm11.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib
2026-03-08T23:13:17.646 INFO:teuthology.orchestra.run.vm11.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson
2026-03-08T23:13:17.646 INFO:teuthology.orchestra.run.vm11.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita
2026-03-08T23:13:17.646 INFO:teuthology.orchestra.run.vm11.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth
2026-03-08T23:13:17.646 INFO:teuthology.orchestra.run.vm11.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug
2026-03-08T23:13:17.646 INFO:teuthology.orchestra.run.vm11.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip
2026-03-08T23:13:17.646 INFO:teuthology.orchestra.run.vm11.stdout: xmlstarlet zip
2026-03-08T23:13:17.646 INFO:teuthology.orchestra.run.vm11.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-08T23:13:17.661 INFO:teuthology.orchestra.run.vm11.stdout:The following packages will be REMOVED:
2026-03-08T23:13:17.662 INFO:teuthology.orchestra.run.vm11.stdout: libcephfs-dev* libcephfs2*
2026-03-08T23:13:17.790 INFO:teuthology.orchestra.run.vm06.stdout:Building dependency tree...
2026-03-08T23:13:17.790 INFO:teuthology.orchestra.run.vm06.stdout:Reading state information...
2026-03-08T23:13:17.857 INFO:teuthology.orchestra.run.vm11.stdout:0 upgraded, 0 newly installed, 2 to remove and 10 not upgraded.
2026-03-08T23:13:17.857 INFO:teuthology.orchestra.run.vm11.stdout:After this operation, 3202 kB disk space will be freed.
2026-03-08T23:13:17.907 INFO:teuthology.orchestra.run.vm11.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117402 files and directories currently installed.)
2026-03-08T23:13:17.909 INFO:teuthology.orchestra.run.vm11.stdout:Removing libcephfs-dev (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:13:17.919 INFO:teuthology.orchestra.run.vm11.stdout:Removing libcephfs2 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:13:17.942 INFO:teuthology.orchestra.run.vm11.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ...
2026-03-08T23:13:17.977 INFO:teuthology.orchestra.run.vm06.stdout:The following packages were automatically installed and are no longer required:
2026-03-08T23:13:17.977 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-08T23:13:17.977 INFO:teuthology.orchestra.run.vm06.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1
2026-03-08T23:13:17.977 INFO:teuthology.orchestra.run.vm06.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2
2026-03-08T23:13:17.978 INFO:teuthology.orchestra.run.vm06.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli
2026-03-08T23:13:17.978 INFO:teuthology.orchestra.run.vm06.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-08T23:13:17.978 INFO:teuthology.orchestra.run.vm06.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot
2026-03-08T23:13:17.978 INFO:teuthology.orchestra.run.vm06.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes
2026-03-08T23:13:17.978 INFO:teuthology.orchestra.run.vm06.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text
2026-03-08T23:13:17.978 INFO:teuthology.orchestra.run.vm06.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako
2026-03-08T23:13:17.978 INFO:teuthology.orchestra.run.vm06.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript
2026-03-08T23:13:17.978 INFO:teuthology.orchestra.run.vm06.stdout: python3-pecan python3-portend python3-prettytable python3-psutil
2026-03-08T23:13:17.978 INFO:teuthology.orchestra.run.vm06.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib
2026-03-08T23:13:17.978 INFO:teuthology.orchestra.run.vm06.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson
2026-03-08T23:13:17.978 INFO:teuthology.orchestra.run.vm06.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita
2026-03-08T23:13:17.978 INFO:teuthology.orchestra.run.vm06.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth
2026-03-08T23:13:17.978 INFO:teuthology.orchestra.run.vm06.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug
2026-03-08T23:13:17.978 INFO:teuthology.orchestra.run.vm06.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip
2026-03-08T23:13:17.978 INFO:teuthology.orchestra.run.vm06.stdout: xmlstarlet zip
2026-03-08T23:13:17.978 INFO:teuthology.orchestra.run.vm06.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-08T23:13:17.992 INFO:teuthology.orchestra.run.vm06.stdout:The following packages will be REMOVED:
2026-03-08T23:13:17.992 INFO:teuthology.orchestra.run.vm06.stdout: libcephfs-dev* libcephfs2*
2026-03-08T23:13:18.184 INFO:teuthology.orchestra.run.vm06.stdout:0 upgraded, 0 newly installed, 2 to remove and 10 not upgraded.
2026-03-08T23:13:18.185 INFO:teuthology.orchestra.run.vm06.stdout:After this operation, 3202 kB disk space will be freed.
2026-03-08T23:13:18.218 INFO:teuthology.orchestra.run.vm06.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117402 files and directories currently installed.)
2026-03-08T23:13:18.220 INFO:teuthology.orchestra.run.vm06.stdout:Removing libcephfs-dev (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:13:18.237 INFO:teuthology.orchestra.run.vm06.stdout:Removing libcephfs2 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:13:18.262 INFO:teuthology.orchestra.run.vm06.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ...
2026-03-08T23:13:19.051 INFO:teuthology.orchestra.run.vm11.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-08T23:13:19.086 INFO:teuthology.orchestra.run.vm11.stdout:Reading package lists...
2026-03-08T23:13:19.284 INFO:teuthology.orchestra.run.vm11.stdout:Building dependency tree...
2026-03-08T23:13:19.285 INFO:teuthology.orchestra.run.vm11.stdout:Reading state information...
2026-03-08T23:13:19.328 INFO:teuthology.orchestra.run.vm06.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-08T23:13:19.366 INFO:teuthology.orchestra.run.vm06.stdout:Reading package lists...
2026-03-08T23:13:19.410 INFO:teuthology.orchestra.run.vm11.stdout:Package 'libcephfs-dev' is not installed, so not removed
2026-03-08T23:13:19.410 INFO:teuthology.orchestra.run.vm11.stdout:The following packages were automatically installed and are no longer required:
2026-03-08T23:13:19.410 INFO:teuthology.orchestra.run.vm11.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-08T23:13:19.410 INFO:teuthology.orchestra.run.vm11.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1
2026-03-08T23:13:19.410 INFO:teuthology.orchestra.run.vm11.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2
2026-03-08T23:13:19.410 INFO:teuthology.orchestra.run.vm11.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli
2026-03-08T23:13:19.411 INFO:teuthology.orchestra.run.vm11.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-08T23:13:19.411 INFO:teuthology.orchestra.run.vm11.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot
2026-03-08T23:13:19.411 INFO:teuthology.orchestra.run.vm11.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes
2026-03-08T23:13:19.411 INFO:teuthology.orchestra.run.vm11.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text
2026-03-08T23:13:19.411 INFO:teuthology.orchestra.run.vm11.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako
2026-03-08T23:13:19.411 INFO:teuthology.orchestra.run.vm11.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript
2026-03-08T23:13:19.411 INFO:teuthology.orchestra.run.vm11.stdout: python3-pecan python3-portend python3-prettytable python3-psutil
2026-03-08T23:13:19.411 INFO:teuthology.orchestra.run.vm11.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib
2026-03-08T23:13:19.411 INFO:teuthology.orchestra.run.vm11.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson
2026-03-08T23:13:19.411 INFO:teuthology.orchestra.run.vm11.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita
2026-03-08T23:13:19.411 INFO:teuthology.orchestra.run.vm11.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth
2026-03-08T23:13:19.411 INFO:teuthology.orchestra.run.vm11.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug
2026-03-08T23:13:19.411 INFO:teuthology.orchestra.run.vm11.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip
2026-03-08T23:13:19.411 INFO:teuthology.orchestra.run.vm11.stdout: xmlstarlet zip
2026-03-08T23:13:19.411 INFO:teuthology.orchestra.run.vm11.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-08T23:13:19.430 INFO:teuthology.orchestra.run.vm11.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded.
2026-03-08T23:13:19.430 INFO:teuthology.orchestra.run.vm11.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-08T23:13:19.461 INFO:teuthology.orchestra.run.vm11.stdout:Reading package lists...
2026-03-08T23:13:19.570 INFO:teuthology.orchestra.run.vm06.stdout:Building dependency tree...
2026-03-08T23:13:19.570 INFO:teuthology.orchestra.run.vm06.stdout:Reading state information...
2026-03-08T23:13:19.692 INFO:teuthology.orchestra.run.vm11.stdout:Building dependency tree...
2026-03-08T23:13:19.692 INFO:teuthology.orchestra.run.vm11.stdout:Reading state information...
2026-03-08T23:13:19.740 INFO:teuthology.orchestra.run.vm06.stdout:Package 'libcephfs-dev' is not installed, so not removed
2026-03-08T23:13:19.740 INFO:teuthology.orchestra.run.vm06.stdout:The following packages were automatically installed and are no longer required:
2026-03-08T23:13:19.740 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-08T23:13:19.741 INFO:teuthology.orchestra.run.vm06.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1
2026-03-08T23:13:19.741 INFO:teuthology.orchestra.run.vm06.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2
2026-03-08T23:13:19.741 INFO:teuthology.orchestra.run.vm06.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli
2026-03-08T23:13:19.741 INFO:teuthology.orchestra.run.vm06.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-08T23:13:19.741 INFO:teuthology.orchestra.run.vm06.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot
2026-03-08T23:13:19.741 INFO:teuthology.orchestra.run.vm06.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes
2026-03-08T23:13:19.741 INFO:teuthology.orchestra.run.vm06.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text
2026-03-08T23:13:19.741 INFO:teuthology.orchestra.run.vm06.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako
2026-03-08T23:13:19.742 INFO:teuthology.orchestra.run.vm06.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript
2026-03-08T23:13:19.742 INFO:teuthology.orchestra.run.vm06.stdout: python3-pecan python3-portend python3-prettytable python3-psutil
2026-03-08T23:13:19.742 INFO:teuthology.orchestra.run.vm06.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib
2026-03-08T23:13:19.742 INFO:teuthology.orchestra.run.vm06.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson
2026-03-08T23:13:19.742 INFO:teuthology.orchestra.run.vm06.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita
2026-03-08T23:13:19.742 INFO:teuthology.orchestra.run.vm06.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth
2026-03-08T23:13:19.742 INFO:teuthology.orchestra.run.vm06.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug
2026-03-08T23:13:19.742 INFO:teuthology.orchestra.run.vm06.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip
2026-03-08T23:13:19.742 INFO:teuthology.orchestra.run.vm06.stdout: xmlstarlet zip
2026-03-08T23:13:19.742 INFO:teuthology.orchestra.run.vm06.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-08T23:13:19.773 INFO:teuthology.orchestra.run.vm06.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded.
2026-03-08T23:13:19.774 INFO:teuthology.orchestra.run.vm06.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-08T23:13:19.810 INFO:teuthology.orchestra.run.vm06.stdout:Reading package lists...
2026-03-08T23:13:19.942 INFO:teuthology.orchestra.run.vm11.stdout:The following packages were automatically installed and are no longer required:
2026-03-08T23:13:19.942 INFO:teuthology.orchestra.run.vm11.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-08T23:13:19.942 INFO:teuthology.orchestra.run.vm11.stdout: libboost-thread1.74.0 libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0
2026-03-08T23:13:19.942 INFO:teuthology.orchestra.run.vm11.stdout: libgfxdr0 libglusterfs0 libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0
2026-03-08T23:13:19.942 INFO:teuthology.orchestra.run.vm11.stdout: liboath0 libonig5 libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5
2026-03-08T23:13:19.943 INFO:teuthology.orchestra.run.vm11.stdout: libqt5network5 librdkafka1 libreadline-dev libsgutils2-2 libthrift-0.16.0
2026-03-08T23:13:19.943 INFO:teuthology.orchestra.run.vm11.stdout: lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli pkg-config
2026-03-08T23:13:19.943 INFO:teuthology.orchestra.run.vm11.stdout: python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-08T23:13:19.943 INFO:teuthology.orchestra.run.vm11.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot
2026-03-08T23:13:19.943 INFO:teuthology.orchestra.run.vm11.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes
2026-03-08T23:13:19.943 INFO:teuthology.orchestra.run.vm11.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text
2026-03-08T23:13:19.943 INFO:teuthology.orchestra.run.vm11.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako
2026-03-08T23:13:19.943 INFO:teuthology.orchestra.run.vm11.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript
2026-03-08T23:13:19.943 INFO:teuthology.orchestra.run.vm11.stdout: python3-pecan python3-portend python3-prettytable python3-psutil
2026-03-08T23:13:19.943 INFO:teuthology.orchestra.run.vm11.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib
2026-03-08T23:13:19.943 INFO:teuthology.orchestra.run.vm11.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson
2026-03-08T23:13:19.943 INFO:teuthology.orchestra.run.vm11.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita
2026-03-08T23:13:19.943 INFO:teuthology.orchestra.run.vm11.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth
2026-03-08T23:13:19.943 INFO:teuthology.orchestra.run.vm11.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug
2026-03-08T23:13:19.943 INFO:teuthology.orchestra.run.vm11.stdout: python3-zc.lockfile qttranslations5-l10n sg3-utils sg3-utils-udev
2026-03-08T23:13:19.943 INFO:teuthology.orchestra.run.vm11.stdout: smartmontools socat unzip xmlstarlet zip
2026-03-08T23:13:19.943 INFO:teuthology.orchestra.run.vm11.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-08T23:13:19.955 INFO:teuthology.orchestra.run.vm11.stdout:The following packages will be REMOVED:
2026-03-08T23:13:19.955 INFO:teuthology.orchestra.run.vm11.stdout: librados2* libradosstriper1* librbd1* librgw2* libsqlite3-mod-ceph*
2026-03-08T23:13:19.955 INFO:teuthology.orchestra.run.vm11.stdout: qemu-block-extra* rbd-fuse*
2026-03-08T23:13:20.036 INFO:teuthology.orchestra.run.vm06.stdout:Building dependency tree...
2026-03-08T23:13:20.036 INFO:teuthology.orchestra.run.vm06.stdout:Reading state information...
2026-03-08T23:13:20.142 INFO:teuthology.orchestra.run.vm11.stdout:0 upgraded, 0 newly installed, 7 to remove and 10 not upgraded.
2026-03-08T23:13:20.142 INFO:teuthology.orchestra.run.vm11.stdout:After this operation, 51.6 MB disk space will be freed.
2026-03-08T23:13:20.184 INFO:teuthology.orchestra.run.vm11.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117387 files and directories currently installed.)
2026-03-08T23:13:20.186 INFO:teuthology.orchestra.run.vm11.stdout:Removing rbd-fuse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:13:20.196 INFO:teuthology.orchestra.run.vm06.stdout:The following packages were automatically installed and are no longer required:
2026-03-08T23:13:20.196 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-08T23:13:20.196 INFO:teuthology.orchestra.run.vm06.stdout: libboost-thread1.74.0 libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0
2026-03-08T23:13:20.196 INFO:teuthology.orchestra.run.vm06.stdout: libgfxdr0 libglusterfs0 libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0
2026-03-08T23:13:20.196 INFO:teuthology.orchestra.run.vm06.stdout: liboath0 libonig5 libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5
2026-03-08T23:13:20.197 INFO:teuthology.orchestra.run.vm06.stdout: libqt5network5 librdkafka1 libreadline-dev libsgutils2-2 libthrift-0.16.0
2026-03-08T23:13:20.197 INFO:teuthology.orchestra.run.vm06.stdout: lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli pkg-config
2026-03-08T23:13:20.197 INFO:teuthology.orchestra.run.vm06.stdout: python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-08T23:13:20.197 INFO:teuthology.orchestra.run.vm06.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot
2026-03-08T23:13:20.197 INFO:teuthology.orchestra.run.vm06.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes
2026-03-08T23:13:20.197 INFO:teuthology.orchestra.run.vm06.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text
2026-03-08T23:13:20.197 INFO:teuthology.orchestra.run.vm06.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako
2026-03-08T23:13:20.197 INFO:teuthology.orchestra.run.vm06.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript
2026-03-08T23:13:20.197 INFO:teuthology.orchestra.run.vm06.stdout: python3-pecan python3-portend python3-prettytable python3-psutil
2026-03-08T23:13:20.197 INFO:teuthology.orchestra.run.vm06.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib
2026-03-08T23:13:20.197 INFO:teuthology.orchestra.run.vm06.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson
2026-03-08T23:13:20.197 INFO:teuthology.orchestra.run.vm06.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita
2026-03-08T23:13:20.197 INFO:teuthology.orchestra.run.vm06.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth
2026-03-08T23:13:20.197 INFO:teuthology.orchestra.run.vm06.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug
2026-03-08T23:13:20.197 INFO:teuthology.orchestra.run.vm06.stdout: python3-zc.lockfile qttranslations5-l10n sg3-utils sg3-utils-udev
2026-03-08T23:13:20.197 INFO:teuthology.orchestra.run.vm06.stdout: smartmontools socat unzip xmlstarlet zip
2026-03-08T23:13:20.197 INFO:teuthology.orchestra.run.vm06.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-08T23:13:20.210 INFO:teuthology.orchestra.run.vm06.stdout:The following packages will be REMOVED:
2026-03-08T23:13:20.210 INFO:teuthology.orchestra.run.vm06.stdout: librados2* libradosstriper1* librbd1* librgw2* libsqlite3-mod-ceph*
2026-03-08T23:13:20.210 INFO:teuthology.orchestra.run.vm06.stdout: qemu-block-extra* rbd-fuse*
2026-03-08T23:13:20.212 INFO:teuthology.orchestra.run.vm11.stdout:Removing libsqlite3-mod-ceph (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:13:20.226 INFO:teuthology.orchestra.run.vm11.stdout:Removing libradosstriper1 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:13:20.245 INFO:teuthology.orchestra.run.vm11.stdout:Removing qemu-block-extra (1:6.2+dfsg-2ubuntu6.28) ...
2026-03-08T23:13:20.395 INFO:teuthology.orchestra.run.vm06.stdout:0 upgraded, 0 newly installed, 7 to remove and 10 not upgraded.
2026-03-08T23:13:20.396 INFO:teuthology.orchestra.run.vm06.stdout:After this operation, 51.6 MB disk space will be freed.
2026-03-08T23:13:20.429 INFO:teuthology.orchestra.run.vm06.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117387 files and directories currently installed.)
2026-03-08T23:13:20.431 INFO:teuthology.orchestra.run.vm06.stdout:Removing rbd-fuse (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:13:20.447 INFO:teuthology.orchestra.run.vm06.stdout:Removing libsqlite3-mod-ceph (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:13:20.467 INFO:teuthology.orchestra.run.vm06.stdout:Removing libradosstriper1 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:13:20.486 INFO:teuthology.orchestra.run.vm06.stdout:Removing qemu-block-extra (1:6.2+dfsg-2ubuntu6.28) ...
2026-03-08T23:13:20.683 INFO:teuthology.orchestra.run.vm11.stdout:Removing librbd1 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:13:20.703 INFO:teuthology.orchestra.run.vm11.stdout:Removing librgw2 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:13:20.723 INFO:teuthology.orchestra.run.vm11.stdout:Removing librados2 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:13:20.753 INFO:teuthology.orchestra.run.vm11.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-08T23:13:20.824 INFO:teuthology.orchestra.run.vm11.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ...
2026-03-08T23:13:20.906 INFO:teuthology.orchestra.run.vm11.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117336 files and directories currently installed.)
2026-03-08T23:13:20.909 INFO:teuthology.orchestra.run.vm11.stdout:Purging configuration files for qemu-block-extra (1:6.2+dfsg-2ubuntu6.28) ...
2026-03-08T23:13:20.946 INFO:teuthology.orchestra.run.vm06.stdout:Removing librbd1 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:13:20.965 INFO:teuthology.orchestra.run.vm06.stdout:Removing librgw2 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:13:20.982 INFO:teuthology.orchestra.run.vm06.stdout:Removing librados2 (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-08T23:13:21.022 INFO:teuthology.orchestra.run.vm06.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-08T23:13:21.070 INFO:teuthology.orchestra.run.vm06.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ...
2026-03-08T23:13:21.163 INFO:teuthology.orchestra.run.vm06.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117336 files and directories currently installed.)
2026-03-08T23:13:21.166 INFO:teuthology.orchestra.run.vm06.stdout:Purging configuration files for qemu-block-extra (1:6.2+dfsg-2ubuntu6.28) ...
2026-03-08T23:13:22.595 INFO:teuthology.orchestra.run.vm11.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-08T23:13:22.628 INFO:teuthology.orchestra.run.vm11.stdout:Reading package lists...
2026-03-08T23:13:22.835 INFO:teuthology.orchestra.run.vm06.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-08T23:13:22.843 INFO:teuthology.orchestra.run.vm11.stdout:Building dependency tree...
2026-03-08T23:13:22.843 INFO:teuthology.orchestra.run.vm11.stdout:Reading state information...
2026-03-08T23:13:22.872 INFO:teuthology.orchestra.run.vm06.stdout:Reading package lists...
2026-03-08T23:13:23.051 INFO:teuthology.orchestra.run.vm11.stdout:Package 'librbd1' is not installed, so not removed
2026-03-08T23:13:23.052 INFO:teuthology.orchestra.run.vm11.stdout:The following packages were automatically installed and are no longer required:
2026-03-08T23:13:23.052 INFO:teuthology.orchestra.run.vm11.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-08T23:13:23.052 INFO:teuthology.orchestra.run.vm11.stdout: libboost-thread1.74.0 libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0
2026-03-08T23:13:23.052 INFO:teuthology.orchestra.run.vm11.stdout: libgfxdr0 libglusterfs0 libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0
2026-03-08T23:13:23.052 INFO:teuthology.orchestra.run.vm11.stdout: liboath0 libonig5 libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5
2026-03-08T23:13:23.052 INFO:teuthology.orchestra.run.vm11.stdout: libqt5network5 librdkafka1 libreadline-dev libsgutils2-2 libthrift-0.16.0
2026-03-08T23:13:23.052 INFO:teuthology.orchestra.run.vm11.stdout: lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli pkg-config
2026-03-08T23:13:23.052 INFO:teuthology.orchestra.run.vm11.stdout: python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh
2026-03-08T23:13:23.052 INFO:teuthology.orchestra.run.vm11.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot
2026-03-08T23:13:23.052 INFO:teuthology.orchestra.run.vm11.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes
2026-03-08T23:13:23.052 INFO:teuthology.orchestra.run.vm11.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text
2026-03-08T23:13:23.052 INFO:teuthology.orchestra.run.vm11.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako
2026-03-08T23:13:23.052 INFO:teuthology.orchestra.run.vm11.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript
2026-03-08T23:13:23.052 INFO:teuthology.orchestra.run.vm11.stdout: python3-pecan python3-portend python3-prettytable python3-psutil
2026-03-08T23:13:23.052 INFO:teuthology.orchestra.run.vm11.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib
2026-03-08T23:13:23.052 INFO:teuthology.orchestra.run.vm11.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson
2026-03-08T23:13:23.052 INFO:teuthology.orchestra.run.vm11.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita
2026-03-08T23:13:23.052 INFO:teuthology.orchestra.run.vm11.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth
2026-03-08T23:13:23.052 INFO:teuthology.orchestra.run.vm11.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug
2026-03-08T23:13:23.053 INFO:teuthology.orchestra.run.vm11.stdout: python3-zc.lockfile qttranslations5-l10n sg3-utils sg3-utils-udev
2026-03-08T23:13:23.053 INFO:teuthology.orchestra.run.vm11.stdout: smartmontools socat unzip xmlstarlet zip
2026-03-08T23:13:23.053 INFO:teuthology.orchestra.run.vm11.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-08T23:13:23.075 INFO:teuthology.orchestra.run.vm11.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded.
2026-03-08T23:13:23.075 INFO:teuthology.orchestra.run.vm11.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-08T23:13:23.089 INFO:teuthology.orchestra.run.vm06.stdout:Building dependency tree...
2026-03-08T23:13:23.089 INFO:teuthology.orchestra.run.vm06.stdout:Reading state information...
2026-03-08T23:13:23.107 INFO:teuthology.orchestra.run.vm11.stdout:Reading package lists...
2026-03-08T23:13:23.259 INFO:teuthology.orchestra.run.vm06.stdout:Package 'librbd1' is not installed, so not removed 2026-03-08T23:13:23.259 INFO:teuthology.orchestra.run.vm06.stdout:The following packages were automatically installed and are no longer required: 2026-03-08T23:13:23.259 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-08T23:13:23.259 INFO:teuthology.orchestra.run.vm06.stdout: libboost-thread1.74.0 libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0 2026-03-08T23:13:23.259 INFO:teuthology.orchestra.run.vm06.stdout: libgfxdr0 libglusterfs0 libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0 2026-03-08T23:13:23.259 INFO:teuthology.orchestra.run.vm06.stdout: liboath0 libonig5 libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5 2026-03-08T23:13:23.259 INFO:teuthology.orchestra.run.vm06.stdout: libqt5network5 librdkafka1 libreadline-dev libsgutils2-2 libthrift-0.16.0 2026-03-08T23:13:23.259 INFO:teuthology.orchestra.run.vm06.stdout: lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli pkg-config 2026-03-08T23:13:23.259 INFO:teuthology.orchestra.run.vm06.stdout: python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-08T23:13:23.259 INFO:teuthology.orchestra.run.vm06.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-08T23:13:23.260 INFO:teuthology.orchestra.run.vm06.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-08T23:13:23.260 INFO:teuthology.orchestra.run.vm06.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-08T23:13:23.260 INFO:teuthology.orchestra.run.vm06.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-08T23:13:23.260 INFO:teuthology.orchestra.run.vm06.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-08T23:13:23.260 INFO:teuthology.orchestra.run.vm06.stdout: python3-pecan python3-portend 
python3-prettytable python3-psutil 2026-03-08T23:13:23.260 INFO:teuthology.orchestra.run.vm06.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-08T23:13:23.260 INFO:teuthology.orchestra.run.vm06.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-08T23:13:23.260 INFO:teuthology.orchestra.run.vm06.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-08T23:13:23.260 INFO:teuthology.orchestra.run.vm06.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-08T23:13:23.260 INFO:teuthology.orchestra.run.vm06.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-08T23:13:23.260 INFO:teuthology.orchestra.run.vm06.stdout: python3-zc.lockfile qttranslations5-l10n sg3-utils sg3-utils-udev 2026-03-08T23:13:23.260 INFO:teuthology.orchestra.run.vm06.stdout: smartmontools socat unzip xmlstarlet zip 2026-03-08T23:13:23.260 INFO:teuthology.orchestra.run.vm06.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-08T23:13:23.283 INFO:teuthology.orchestra.run.vm06.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-08T23:13:23.283 INFO:teuthology.orchestra.run.vm06.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-08T23:13:23.315 INFO:teuthology.orchestra.run.vm06.stdout:Reading package lists... 2026-03-08T23:13:23.335 INFO:teuthology.orchestra.run.vm11.stdout:Building dependency tree... 2026-03-08T23:13:23.336 INFO:teuthology.orchestra.run.vm11.stdout:Reading state information... 
2026-03-08T23:13:23.489 INFO:teuthology.orchestra.run.vm11.stdout:Package 'rbd-fuse' is not installed, so not removed 2026-03-08T23:13:23.490 INFO:teuthology.orchestra.run.vm11.stdout:The following packages were automatically installed and are no longer required: 2026-03-08T23:13:23.490 INFO:teuthology.orchestra.run.vm11.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-08T23:13:23.490 INFO:teuthology.orchestra.run.vm11.stdout: libboost-thread1.74.0 libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0 2026-03-08T23:13:23.490 INFO:teuthology.orchestra.run.vm11.stdout: libgfxdr0 libglusterfs0 libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0 2026-03-08T23:13:23.490 INFO:teuthology.orchestra.run.vm11.stdout: liboath0 libonig5 libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5 2026-03-08T23:13:23.490 INFO:teuthology.orchestra.run.vm11.stdout: libqt5network5 librdkafka1 libreadline-dev libsgutils2-2 libthrift-0.16.0 2026-03-08T23:13:23.490 INFO:teuthology.orchestra.run.vm11.stdout: lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli pkg-config 2026-03-08T23:13:23.490 INFO:teuthology.orchestra.run.vm11.stdout: python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-08T23:13:23.490 INFO:teuthology.orchestra.run.vm11.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-08T23:13:23.490 INFO:teuthology.orchestra.run.vm11.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-08T23:13:23.490 INFO:teuthology.orchestra.run.vm11.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-08T23:13:23.490 INFO:teuthology.orchestra.run.vm11.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-08T23:13:23.490 INFO:teuthology.orchestra.run.vm11.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-08T23:13:23.490 INFO:teuthology.orchestra.run.vm11.stdout: python3-pecan python3-portend 
python3-prettytable python3-psutil 2026-03-08T23:13:23.490 INFO:teuthology.orchestra.run.vm11.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-08T23:13:23.490 INFO:teuthology.orchestra.run.vm11.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-08T23:13:23.490 INFO:teuthology.orchestra.run.vm11.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-08T23:13:23.490 INFO:teuthology.orchestra.run.vm11.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-08T23:13:23.490 INFO:teuthology.orchestra.run.vm11.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-08T23:13:23.491 INFO:teuthology.orchestra.run.vm11.stdout: python3-zc.lockfile qttranslations5-l10n sg3-utils sg3-utils-udev 2026-03-08T23:13:23.491 INFO:teuthology.orchestra.run.vm11.stdout: smartmontools socat unzip xmlstarlet zip 2026-03-08T23:13:23.491 INFO:teuthology.orchestra.run.vm11.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-08T23:13:23.512 INFO:teuthology.orchestra.run.vm11.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-08T23:13:23.512 INFO:teuthology.orchestra.run.vm11.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-08T23:13:23.514 DEBUG:teuthology.orchestra.run.vm11:> dpkg -l | grep '^.\(U\|H\)R' | awk '{print $2}' | sudo xargs --no-run-if-empty dpkg -P --force-remove-reinstreq 2026-03-08T23:13:23.529 INFO:teuthology.orchestra.run.vm06.stdout:Building dependency tree... 2026-03-08T23:13:23.530 INFO:teuthology.orchestra.run.vm06.stdout:Reading state information... 
2026-03-08T23:13:23.570 DEBUG:teuthology.orchestra.run.vm11:> sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" autoremove 2026-03-08T23:13:23.649 INFO:teuthology.orchestra.run.vm11.stdout:Reading package lists... 2026-03-08T23:13:23.724 INFO:teuthology.orchestra.run.vm06.stdout:Package 'rbd-fuse' is not installed, so not removed 2026-03-08T23:13:23.724 INFO:teuthology.orchestra.run.vm06.stdout:The following packages were automatically installed and are no longer required: 2026-03-08T23:13:23.724 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-08T23:13:23.724 INFO:teuthology.orchestra.run.vm06.stdout: libboost-thread1.74.0 libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0 2026-03-08T23:13:23.724 INFO:teuthology.orchestra.run.vm06.stdout: libgfxdr0 libglusterfs0 libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0 2026-03-08T23:13:23.724 INFO:teuthology.orchestra.run.vm06.stdout: liboath0 libonig5 libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5 2026-03-08T23:13:23.724 INFO:teuthology.orchestra.run.vm06.stdout: libqt5network5 librdkafka1 libreadline-dev libsgutils2-2 libthrift-0.16.0 2026-03-08T23:13:23.725 INFO:teuthology.orchestra.run.vm06.stdout: lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli pkg-config 2026-03-08T23:13:23.725 INFO:teuthology.orchestra.run.vm06.stdout: python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-08T23:13:23.725 INFO:teuthology.orchestra.run.vm06.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-08T23:13:23.725 INFO:teuthology.orchestra.run.vm06.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-08T23:13:23.725 INFO:teuthology.orchestra.run.vm06.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-08T23:13:23.725 INFO:teuthology.orchestra.run.vm06.stdout: python3-joblib 
python3-kubernetes python3-logutils python3-mako 2026-03-08T23:13:23.725 INFO:teuthology.orchestra.run.vm06.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-08T23:13:23.725 INFO:teuthology.orchestra.run.vm06.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-08T23:13:23.725 INFO:teuthology.orchestra.run.vm06.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-08T23:13:23.725 INFO:teuthology.orchestra.run.vm06.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-08T23:13:23.725 INFO:teuthology.orchestra.run.vm06.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-08T23:13:23.725 INFO:teuthology.orchestra.run.vm06.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-08T23:13:23.725 INFO:teuthology.orchestra.run.vm06.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-08T23:13:23.725 INFO:teuthology.orchestra.run.vm06.stdout: python3-zc.lockfile qttranslations5-l10n sg3-utils sg3-utils-udev 2026-03-08T23:13:23.725 INFO:teuthology.orchestra.run.vm06.stdout: smartmontools socat unzip xmlstarlet zip 2026-03-08T23:13:23.725 INFO:teuthology.orchestra.run.vm06.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-08T23:13:23.752 INFO:teuthology.orchestra.run.vm06.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-08T23:13:23.752 INFO:teuthology.orchestra.run.vm06.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 
2026-03-08T23:13:23.754 DEBUG:teuthology.orchestra.run.vm06:> dpkg -l | grep '^.\(U\|H\)R' | awk '{print $2}' | sudo xargs --no-run-if-empty dpkg -P --force-remove-reinstreq 2026-03-08T23:13:23.808 DEBUG:teuthology.orchestra.run.vm06:> sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" autoremove 2026-03-08T23:13:23.870 INFO:teuthology.orchestra.run.vm11.stdout:Building dependency tree... 2026-03-08T23:13:23.870 INFO:teuthology.orchestra.run.vm11.stdout:Reading state information... 2026-03-08T23:13:23.885 INFO:teuthology.orchestra.run.vm06.stdout:Reading package lists... 2026-03-08T23:13:24.041 INFO:teuthology.orchestra.run.vm11.stdout:The following packages will be REMOVED: 2026-03-08T23:13:24.041 INFO:teuthology.orchestra.run.vm11.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-08T23:13:24.041 INFO:teuthology.orchestra.run.vm11.stdout: libboost-thread1.74.0 libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0 2026-03-08T23:13:24.041 INFO:teuthology.orchestra.run.vm11.stdout: libgfxdr0 libglusterfs0 libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0 2026-03-08T23:13:24.041 INFO:teuthology.orchestra.run.vm11.stdout: liboath0 libonig5 libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5 2026-03-08T23:13:24.042 INFO:teuthology.orchestra.run.vm11.stdout: libqt5network5 librdkafka1 libreadline-dev libsgutils2-2 libthrift-0.16.0 2026-03-08T23:13:24.042 INFO:teuthology.orchestra.run.vm11.stdout: lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli pkg-config 2026-03-08T23:13:24.042 INFO:teuthology.orchestra.run.vm11.stdout: python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-08T23:13:24.042 INFO:teuthology.orchestra.run.vm11.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-08T23:13:24.042 INFO:teuthology.orchestra.run.vm11.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 
2026-03-08T23:13:24.042 INFO:teuthology.orchestra.run.vm11.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-08T23:13:24.042 INFO:teuthology.orchestra.run.vm11.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-08T23:13:24.042 INFO:teuthology.orchestra.run.vm11.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-08T23:13:24.042 INFO:teuthology.orchestra.run.vm11.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-08T23:13:24.042 INFO:teuthology.orchestra.run.vm11.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-08T23:13:24.042 INFO:teuthology.orchestra.run.vm11.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-08T23:13:24.042 INFO:teuthology.orchestra.run.vm11.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-08T23:13:24.042 INFO:teuthology.orchestra.run.vm11.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-08T23:13:24.042 INFO:teuthology.orchestra.run.vm11.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-08T23:13:24.042 INFO:teuthology.orchestra.run.vm11.stdout: python3-zc.lockfile qttranslations5-l10n sg3-utils sg3-utils-udev 2026-03-08T23:13:24.043 INFO:teuthology.orchestra.run.vm11.stdout: smartmontools socat unzip xmlstarlet zip 2026-03-08T23:13:24.106 INFO:teuthology.orchestra.run.vm06.stdout:Building dependency tree... 2026-03-08T23:13:24.106 INFO:teuthology.orchestra.run.vm06.stdout:Reading state information... 2026-03-08T23:13:24.233 INFO:teuthology.orchestra.run.vm11.stdout:0 upgraded, 0 newly installed, 87 to remove and 10 not upgraded. 2026-03-08T23:13:24.233 INFO:teuthology.orchestra.run.vm11.stdout:After this operation, 107 MB disk space will be freed. 2026-03-08T23:13:24.283 INFO:teuthology.orchestra.run.vm11.stdout:(Reading database ... 
117336 files and directories currently installed.) 2026-03-08T23:13:24.287 INFO:teuthology.orchestra.run.vm11.stdout:Removing ceph-mgr-modules-core (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T23:13:24.312 INFO:teuthology.orchestra.run.vm11.stdout:Removing jq (1.6-2.1ubuntu3.1) ... 2026-03-08T23:13:24.330 INFO:teuthology.orchestra.run.vm11.stdout:Removing kpartx (0.8.8-1ubuntu1.22.04.4) ... 2026-03-08T23:13:24.362 INFO:teuthology.orchestra.run.vm06.stdout:The following packages will be REMOVED: 2026-03-08T23:13:24.362 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-08T23:13:24.362 INFO:teuthology.orchestra.run.vm06.stdout: libboost-thread1.74.0 libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0 2026-03-08T23:13:24.362 INFO:teuthology.orchestra.run.vm06.stdout: libgfxdr0 libglusterfs0 libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0 2026-03-08T23:13:24.362 INFO:teuthology.orchestra.run.vm06.stdout: liboath0 libonig5 libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5 2026-03-08T23:13:24.363 INFO:teuthology.orchestra.run.vm06.stdout: libqt5network5 librdkafka1 libreadline-dev libsgutils2-2 libthrift-0.16.0 2026-03-08T23:13:24.364 INFO:teuthology.orchestra.run.vm06.stdout: lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli pkg-config 2026-03-08T23:13:24.364 INFO:teuthology.orchestra.run.vm06.stdout: python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-08T23:13:24.370 INFO:teuthology.orchestra.run.vm11.stdout:Removing libboost-iostreams1.74.0:amd64 (1.74.0-14ubuntu3) ... 2026-03-08T23:13:24.389 INFO:teuthology.orchestra.run.vm11.stdout:Removing libboost-thread1.74.0:amd64 (1.74.0-14ubuntu3) ... 
2026-03-08T23:13:24.364 INFO:teuthology.orchestra.run.vm06.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-08T23:13:24.364 INFO:teuthology.orchestra.run.vm06.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-08T23:13:24.364 INFO:teuthology.orchestra.run.vm06.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-08T23:13:24.364 INFO:teuthology.orchestra.run.vm06.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-08T23:13:24.364 INFO:teuthology.orchestra.run.vm06.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-08T23:13:24.364 INFO:teuthology.orchestra.run.vm06.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-08T23:13:24.364 INFO:teuthology.orchestra.run.vm06.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-08T23:13:24.364 INFO:teuthology.orchestra.run.vm06.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-08T23:13:24.364 INFO:teuthology.orchestra.run.vm06.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-08T23:13:24.364 INFO:teuthology.orchestra.run.vm06.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-08T23:13:24.364 INFO:teuthology.orchestra.run.vm06.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-08T23:13:24.364 INFO:teuthology.orchestra.run.vm06.stdout: python3-zc.lockfile qttranslations5-l10n sg3-utils sg3-utils-udev 2026-03-08T23:13:24.364 INFO:teuthology.orchestra.run.vm06.stdout: smartmontools socat unzip xmlstarlet zip 2026-03-08T23:13:24.370 INFO:teuthology.orchestra.run.vm11.stdout:Removing libboost-iostreams1.74.0:amd64 (1.74.0-14ubuntu3) ... 2026-03-08T23:13:24.389 INFO:teuthology.orchestra.run.vm11.stdout:Removing libboost-thread1.74.0:amd64 (1.74.0-14ubuntu3) ... 
2026-03-08T23:13:24.402 INFO:teuthology.orchestra.run.vm11.stdout:Removing libthrift-0.16.0:amd64 (0.16.0-2) ... 2026-03-08T23:13:24.421 INFO:teuthology.orchestra.run.vm11.stdout:Removing libqt5network5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-08T23:13:24.445 INFO:teuthology.orchestra.run.vm11.stdout:Removing libqt5dbus5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-08T23:13:24.483 INFO:teuthology.orchestra.run.vm11.stdout:Removing libqt5core5a:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-08T23:13:24.526 INFO:teuthology.orchestra.run.vm11.stdout:Removing libdouble-conversion3:amd64 (3.1.7-4) ... 2026-03-08T23:13:24.539 INFO:teuthology.orchestra.run.vm11.stdout:Removing libfuse2:amd64 (2.9.9-5ubuntu3) ... 2026-03-08T23:13:24.566 INFO:teuthology.orchestra.run.vm11.stdout:Removing libgfapi0:amd64 (10.1-1ubuntu0.2) ... 2026-03-08T23:13:24.574 INFO:teuthology.orchestra.run.vm06.stdout:0 upgraded, 0 newly installed, 87 to remove and 10 not upgraded. 2026-03-08T23:13:24.574 INFO:teuthology.orchestra.run.vm06.stdout:After this operation, 107 MB disk space will be freed. 2026-03-08T23:13:24.578 INFO:teuthology.orchestra.run.vm11.stdout:Removing libgfrpc0:amd64 (10.1-1ubuntu0.2) ... 2026-03-08T23:13:24.626 INFO:teuthology.orchestra.run.vm11.stdout:Removing libgfxdr0:amd64 (10.1-1ubuntu0.2) ... 2026-03-08T23:13:24.640 INFO:teuthology.orchestra.run.vm06.stdout:(Reading database ... 117336 files and directories currently installed.) 
2026-03-08T23:13:24.642 INFO:teuthology.orchestra.run.vm06.stdout:Removing ceph-mgr-modules-core (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T23:13:24.675 INFO:teuthology.orchestra.run.vm11.stdout:Removing libglusterfs0:amd64 (10.1-1ubuntu0.2) ... 2026-03-08T23:13:24.713 INFO:teuthology.orchestra.run.vm06.stdout:Removing jq (1.6-2.1ubuntu3.1) ... 2026-03-08T23:13:24.757 INFO:teuthology.orchestra.run.vm11.stdout:Removing libiscsi7:amd64 (1.19.0-3build2) ... 2026-03-08T23:13:24.774 INFO:teuthology.orchestra.run.vm06.stdout:Removing kpartx (0.8.8-1ubuntu1.22.04.4) ... 2026-03-08T23:13:24.806 INFO:teuthology.orchestra.run.vm11.stdout:Removing libjq1:amd64 (1.6-2.1ubuntu3.1) ... 2026-03-08T23:13:24.832 INFO:teuthology.orchestra.run.vm06.stdout:Removing libboost-iostreams1.74.0:amd64 (1.74.0-14ubuntu3) ... 2026-03-08T23:13:24.860 INFO:teuthology.orchestra.run.vm11.stdout:Removing liblttng-ust1:amd64 (2.13.1-1ubuntu1) ... 2026-03-08T23:13:24.912 INFO:teuthology.orchestra.run.vm06.stdout:Removing libboost-thread1.74.0:amd64 (1.74.0-14ubuntu3) ... 2026-03-08T23:13:24.922 INFO:teuthology.orchestra.run.vm11.stdout:Removing luarocks (3.8.0+dfsg1-1) ... 2026-03-08T23:13:24.959 INFO:teuthology.orchestra.run.vm06.stdout:Removing libthrift-0.16.0:amd64 (0.16.0-2) ... 2026-03-08T23:13:24.979 INFO:teuthology.orchestra.run.vm11.stdout:Removing liblua5.3-dev:amd64 (5.3.6-1build1) ... 2026-03-08T23:13:25.012 INFO:teuthology.orchestra.run.vm06.stdout:Removing libqt5network5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-08T23:13:25.020 INFO:teuthology.orchestra.run.vm11.stdout:Removing libnbd0 (1.10.5-1) ... 2026-03-08T23:13:25.068 INFO:teuthology.orchestra.run.vm06.stdout:Removing libqt5dbus5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-08T23:13:25.079 INFO:teuthology.orchestra.run.vm11.stdout:Removing liboath0:amd64 (2.6.7-3ubuntu0.1) ... 2026-03-08T23:13:25.118 INFO:teuthology.orchestra.run.vm06.stdout:Removing libqt5core5a:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 
2026-03-08T23:13:25.120 INFO:teuthology.orchestra.run.vm11.stdout:Removing libonig5:amd64 (6.9.7.1-2build1) ... 2026-03-08T23:13:25.163 INFO:teuthology.orchestra.run.vm11.stdout:Removing libpcre2-16-0:amd64 (10.39-3ubuntu0.1) ... 2026-03-08T23:13:25.174 INFO:teuthology.orchestra.run.vm06.stdout:Removing libdouble-conversion3:amd64 (3.1.7-4) ... 2026-03-08T23:13:25.218 INFO:teuthology.orchestra.run.vm11.stdout:Removing libpmemobj1:amd64 (1.11.1-3build1) ... 2026-03-08T23:13:25.228 INFO:teuthology.orchestra.run.vm06.stdout:Removing libfuse2:amd64 (2.9.9-5ubuntu3) ... 2026-03-08T23:13:25.278 INFO:teuthology.orchestra.run.vm11.stdout:Removing librdkafka1:amd64 (1.8.0-1build1) ... 2026-03-08T23:13:25.292 INFO:teuthology.orchestra.run.vm06.stdout:Removing libgfapi0:amd64 (10.1-1ubuntu0.2) ... 2026-03-08T23:13:25.323 INFO:teuthology.orchestra.run.vm11.stdout:Removing libreadline-dev:amd64 (8.1.2-1) ... 2026-03-08T23:13:25.358 INFO:teuthology.orchestra.run.vm06.stdout:Removing libgfrpc0:amd64 (10.1-1ubuntu0.2) ... 2026-03-08T23:13:25.374 INFO:teuthology.orchestra.run.vm11.stdout:Removing sg3-utils-udev (1.46-1ubuntu0.22.04.1) ... 2026-03-08T23:13:25.464 INFO:teuthology.orchestra.run.vm11.stdout:update-initramfs: deferring update (trigger activated) 2026-03-08T23:13:25.477 INFO:teuthology.orchestra.run.vm06.stdout:Removing libgfxdr0:amd64 (10.1-1ubuntu0.2) ... 2026-03-08T23:13:25.499 INFO:teuthology.orchestra.run.vm11.stdout:Removing sg3-utils (1.46-1ubuntu0.22.04.1) ... 2026-03-08T23:13:25.555 INFO:teuthology.orchestra.run.vm06.stdout:Removing libglusterfs0:amd64 (10.1-1ubuntu0.2) ... 2026-03-08T23:13:25.566 INFO:teuthology.orchestra.run.vm11.stdout:Removing libsgutils2-2:amd64 (1.46-1ubuntu0.22.04.1) ... 2026-03-08T23:13:25.619 INFO:teuthology.orchestra.run.vm11.stdout:Removing lua-any (27ubuntu1) ... 2026-03-08T23:13:25.620 INFO:teuthology.orchestra.run.vm06.stdout:Removing libiscsi7:amd64 (1.19.0-3build2) ... 
2026-03-08T23:13:25.660 INFO:teuthology.orchestra.run.vm11.stdout:Removing lua-sec:amd64 (1.0.2-1) ... 2026-03-08T23:13:25.686 INFO:teuthology.orchestra.run.vm06.stdout:Removing libjq1:amd64 (1.6-2.1ubuntu3.1) ... 2026-03-08T23:13:25.716 INFO:teuthology.orchestra.run.vm11.stdout:Removing lua-socket:amd64 (3.0~rc1+git+ac3201d-6) ... 2026-03-08T23:13:25.726 INFO:teuthology.orchestra.run.vm06.stdout:Removing liblttng-ust1:amd64 (2.13.1-1ubuntu1) ... 2026-03-08T23:13:25.774 INFO:teuthology.orchestra.run.vm11.stdout:Removing lua5.1 (5.1.5-8.1build4) ... 2026-03-08T23:13:25.793 INFO:teuthology.orchestra.run.vm06.stdout:Removing luarocks (3.8.0+dfsg1-1) ... 2026-03-08T23:13:25.808 INFO:teuthology.orchestra.run.vm11.stdout:Removing nvme-cli (1.16-3ubuntu0.3) ... 2026-03-08T23:13:25.824 INFO:teuthology.orchestra.run.vm06.stdout:Removing liblua5.3-dev:amd64 (5.3.6-1build1) ... 2026-03-08T23:13:25.858 INFO:teuthology.orchestra.run.vm06.stdout:Removing libnbd0 (1.10.5-1) ... 2026-03-08T23:13:25.875 INFO:teuthology.orchestra.run.vm06.stdout:Removing liboath0:amd64 (2.6.7-3ubuntu0.1) ... 2026-03-08T23:13:25.898 INFO:teuthology.orchestra.run.vm06.stdout:Removing libonig5:amd64 (6.9.7.1-2build1) ... 2026-03-08T23:13:25.919 INFO:teuthology.orchestra.run.vm06.stdout:Removing libpcre2-16-0:amd64 (10.39-3ubuntu0.1) ... 2026-03-08T23:13:25.932 INFO:teuthology.orchestra.run.vm06.stdout:Removing libpmemobj1:amd64 (1.11.1-3build1) ... 2026-03-08T23:13:25.952 INFO:teuthology.orchestra.run.vm06.stdout:Removing librdkafka1:amd64 (1.8.0-1build1) ... 2026-03-08T23:13:25.974 INFO:teuthology.orchestra.run.vm06.stdout:Removing libreadline-dev:amd64 (8.1.2-1) ... 2026-03-08T23:13:25.987 INFO:teuthology.orchestra.run.vm06.stdout:Removing sg3-utils-udev (1.46-1ubuntu0.22.04.1) ... 
2026-03-08T23:13:25.996 INFO:teuthology.orchestra.run.vm06.stdout:update-initramfs: deferring update (trigger activated) 2026-03-08T23:13:26.012 INFO:teuthology.orchestra.run.vm06.stdout:Removing sg3-utils (1.46-1ubuntu0.22.04.1) ... 2026-03-08T23:13:26.052 INFO:teuthology.orchestra.run.vm06.stdout:Removing libsgutils2-2:amd64 (1.46-1ubuntu0.22.04.1) ... 2026-03-08T23:13:26.076 INFO:teuthology.orchestra.run.vm06.stdout:Removing lua-any (27ubuntu1) ... 2026-03-08T23:13:26.089 INFO:teuthology.orchestra.run.vm06.stdout:Removing lua-sec:amd64 (1.0.2-1) ... 2026-03-08T23:13:26.103 INFO:teuthology.orchestra.run.vm06.stdout:Removing lua-socket:amd64 (3.0~rc1+git+ac3201d-6) ... 2026-03-08T23:13:26.136 INFO:teuthology.orchestra.run.vm06.stdout:Removing lua5.1 (5.1.5-8.1build4) ... 2026-03-08T23:13:26.158 INFO:teuthology.orchestra.run.vm06.stdout:Removing nvme-cli (1.16-3ubuntu0.3) ... 2026-03-08T23:13:26.275 INFO:teuthology.orchestra.run.vm11.stdout:Removing pkg-config (0.29.2-1ubuntu3) ... 2026-03-08T23:13:26.311 INFO:teuthology.orchestra.run.vm11.stdout:Removing python-asyncssh-doc (2.5.0-1ubuntu0.1) ... 2026-03-08T23:13:26.340 INFO:teuthology.orchestra.run.vm11.stdout:Removing python3-pecan (1.3.3-4ubuntu2) ... 2026-03-08T23:13:26.484 INFO:teuthology.orchestra.run.vm11.stdout:Removing python3-webtest (2.0.35-1) ... 2026-03-08T23:13:26.546 INFO:teuthology.orchestra.run.vm11.stdout:Removing python3-pastescript (2.0.2-4) ... 2026-03-08T23:13:26.608 INFO:teuthology.orchestra.run.vm06.stdout:Removing pkg-config (0.29.2-1ubuntu3) ... 2026-03-08T23:13:26.611 INFO:teuthology.orchestra.run.vm11.stdout:Removing python3-pastedeploy (2.1.1-1) ... 2026-03-08T23:13:26.655 INFO:teuthology.orchestra.run.vm06.stdout:Removing python-asyncssh-doc (2.5.0-1ubuntu0.1) ... 2026-03-08T23:13:26.667 INFO:teuthology.orchestra.run.vm11.stdout:Removing python-pastedeploy-tpl (2.1.1-1) ... 
2026-03-08T23:13:26.681 INFO:teuthology.orchestra.run.vm11.stdout:Removing python3-asyncssh (2.5.0-1ubuntu0.1) ... 2026-03-08T23:13:26.693 INFO:teuthology.orchestra.run.vm06.stdout:Removing python3-pecan (1.3.3-4ubuntu2) ... 2026-03-08T23:13:26.740 INFO:teuthology.orchestra.run.vm11.stdout:Removing python3-kubernetes (12.0.1-1ubuntu1) ... 2026-03-08T23:13:26.764 INFO:teuthology.orchestra.run.vm06.stdout:Removing python3-webtest (2.0.35-1) ... 2026-03-08T23:13:26.877 INFO:teuthology.orchestra.run.vm06.stdout:Removing python3-pastescript (2.0.2-4) ... 2026-03-08T23:13:26.944 INFO:teuthology.orchestra.run.vm06.stdout:Removing python3-pastedeploy (2.1.1-1) ... 2026-03-08T23:13:26.999 INFO:teuthology.orchestra.run.vm06.stdout:Removing python-pastedeploy-tpl (2.1.1-1) ... 2026-03-08T23:13:27.027 INFO:teuthology.orchestra.run.vm06.stdout:Removing python3-asyncssh (2.5.0-1ubuntu0.1) ... 2026-03-08T23:13:27.044 INFO:teuthology.orchestra.run.vm11.stdout:Removing python3-google-auth (1.5.1-3) ... 2026-03-08T23:13:27.092 INFO:teuthology.orchestra.run.vm06.stdout:Removing python3-kubernetes (12.0.1-1ubuntu1) ... 2026-03-08T23:13:27.110 INFO:teuthology.orchestra.run.vm11.stdout:Removing python3-cachetools (5.0.0-1) ... 2026-03-08T23:13:27.172 INFO:teuthology.orchestra.run.vm11.stdout:Removing python3-ceph-argparse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T23:13:27.283 INFO:teuthology.orchestra.run.vm11.stdout:Removing python3-ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T23:13:27.337 INFO:teuthology.orchestra.run.vm11.stdout:Removing python3-cherrypy3 (18.6.1-4) ... 2026-03-08T23:13:27.390 INFO:teuthology.orchestra.run.vm06.stdout:Removing python3-google-auth (1.5.1-3) ... 2026-03-08T23:13:27.403 INFO:teuthology.orchestra.run.vm11.stdout:Removing python3-cheroot (8.5.2+ds1-1ubuntu3.1) ... 2026-03-08T23:13:27.449 INFO:teuthology.orchestra.run.vm06.stdout:Removing python3-cachetools (5.0.0-1) ... 
2026-03-08T23:13:27.455 INFO:teuthology.orchestra.run.vm11.stdout:Removing python3-jaraco.collections (3.4.0-2) ... 2026-03-08T23:13:27.504 INFO:teuthology.orchestra.run.vm11.stdout:Removing python3-jaraco.classes (3.2.1-3) ... 2026-03-08T23:13:27.504 INFO:teuthology.orchestra.run.vm06.stdout:Removing python3-ceph-argparse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T23:13:27.555 INFO:teuthology.orchestra.run.vm11.stdout:Removing python3-portend (3.0.0-1) ... 2026-03-08T23:13:27.556 INFO:teuthology.orchestra.run.vm06.stdout:Removing python3-ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-08T23:13:27.603 INFO:teuthology.orchestra.run.vm11.stdout:Removing python3-tempora (4.1.2-1) ... 2026-03-08T23:13:27.611 INFO:teuthology.orchestra.run.vm06.stdout:Removing python3-cherrypy3 (18.6.1-4) ... 2026-03-08T23:13:27.659 INFO:teuthology.orchestra.run.vm11.stdout:Removing python3-jaraco.text (3.6.0-2) ... 2026-03-08T23:13:27.674 INFO:teuthology.orchestra.run.vm06.stdout:Removing python3-cheroot (8.5.2+ds1-1ubuntu3.1) ... 2026-03-08T23:13:27.738 INFO:teuthology.orchestra.run.vm06.stdout:Removing python3-jaraco.collections (3.4.0-2) ... 2026-03-08T23:13:27.739 INFO:teuthology.orchestra.run.vm11.stdout:Removing python3-jaraco.functools (3.4.0-2) ... 2026-03-08T23:13:27.795 INFO:teuthology.orchestra.run.vm11.stdout:Removing python3-sklearn (0.23.2-5ubuntu6) ... 2026-03-08T23:13:27.796 INFO:teuthology.orchestra.run.vm06.stdout:Removing python3-jaraco.classes (3.2.1-3) ... 2026-03-08T23:13:27.854 INFO:teuthology.orchestra.run.vm06.stdout:Removing python3-portend (3.0.0-1) ... 2026-03-08T23:13:27.910 INFO:teuthology.orchestra.run.vm06.stdout:Removing python3-tempora (4.1.2-1) ... 2026-03-08T23:13:27.940 INFO:teuthology.orchestra.run.vm11.stdout:Removing python3-joblib (0.17.0-4ubuntu1) ... 2026-03-08T23:13:27.972 INFO:teuthology.orchestra.run.vm06.stdout:Removing python3-jaraco.text (3.6.0-2) ... 
2026-03-08T23:13:28.025 INFO:teuthology.orchestra.run.vm11.stdout:Removing python3-logutils (0.3.3-8) ...
2026-03-08T23:13:28.025 INFO:teuthology.orchestra.run.vm06.stdout:Removing python3-jaraco.functools (3.4.0-2) ...
2026-03-08T23:13:28.085 INFO:teuthology.orchestra.run.vm06.stdout:Removing python3-sklearn (0.23.2-5ubuntu6) ...
2026-03-08T23:13:28.085 INFO:teuthology.orchestra.run.vm11.stdout:Removing python3-mako (1.1.3+ds1-2ubuntu0.1) ...
2026-03-08T23:13:28.140 INFO:teuthology.orchestra.run.vm11.stdout:Removing python3-natsort (8.0.2-1) ...
2026-03-08T23:13:28.194 INFO:teuthology.orchestra.run.vm11.stdout:Removing python3-paste (3.5.0+dfsg1-1) ...
2026-03-08T23:13:28.214 INFO:teuthology.orchestra.run.vm06.stdout:Removing python3-joblib (0.17.0-4ubuntu1) ...
2026-03-08T23:13:28.257 INFO:teuthology.orchestra.run.vm11.stdout:Removing python3-prettytable (2.5.0-2) ...
2026-03-08T23:13:28.290 INFO:teuthology.orchestra.run.vm06.stdout:Removing python3-logutils (0.3.3-8) ...
2026-03-08T23:13:28.310 INFO:teuthology.orchestra.run.vm11.stdout:Removing python3-psutil (5.9.0-1build1) ...
2026-03-08T23:13:28.366 INFO:teuthology.orchestra.run.vm06.stdout:Removing python3-mako (1.1.3+ds1-2ubuntu0.1) ...
2026-03-08T23:13:28.374 INFO:teuthology.orchestra.run.vm11.stdout:Removing python3-pyinotify (0.9.6-1.3) ...
2026-03-08T23:13:28.419 INFO:teuthology.orchestra.run.vm06.stdout:Removing python3-natsort (8.0.2-1) ...
2026-03-08T23:13:28.426 INFO:teuthology.orchestra.run.vm11.stdout:Removing python3-routes (2.5.1-1ubuntu1) ...
2026-03-08T23:13:28.483 INFO:teuthology.orchestra.run.vm06.stdout:Removing python3-paste (3.5.0+dfsg1-1) ...
2026-03-08T23:13:28.501 INFO:teuthology.orchestra.run.vm11.stdout:Removing python3-repoze.lru (0.7-2) ...
2026-03-08T23:13:28.553 INFO:teuthology.orchestra.run.vm06.stdout:Removing python3-prettytable (2.5.0-2) ...
2026-03-08T23:13:28.564 INFO:teuthology.orchestra.run.vm11.stdout:Removing python3-requests-oauthlib (1.3.0+ds-0.1) ...
2026-03-08T23:13:28.682 INFO:teuthology.orchestra.run.vm06.stdout:Removing python3-psutil (5.9.0-1build1) ...
2026-03-08T23:13:28.683 INFO:teuthology.orchestra.run.vm11.stdout:Removing python3-rsa (4.8-1) ...
2026-03-08T23:13:28.742 INFO:teuthology.orchestra.run.vm06.stdout:Removing python3-pyinotify (0.9.6-1.3) ...
2026-03-08T23:13:28.745 INFO:teuthology.orchestra.run.vm11.stdout:Removing python3-simplegeneric (0.8.1-3) ...
2026-03-08T23:13:28.804 INFO:teuthology.orchestra.run.vm11.stdout:Removing python3-simplejson (3.17.6-1build1) ...
2026-03-08T23:13:28.805 INFO:teuthology.orchestra.run.vm06.stdout:Removing python3-routes (2.5.1-1ubuntu1) ...
2026-03-08T23:13:28.861 INFO:teuthology.orchestra.run.vm11.stdout:Removing python3-singledispatch (3.4.0.3-3) ...
2026-03-08T23:13:28.863 INFO:teuthology.orchestra.run.vm06.stdout:Removing python3-repoze.lru (0.7-2) ...
2026-03-08T23:13:28.916 INFO:teuthology.orchestra.run.vm11.stdout:Removing python3-sklearn-lib:amd64 (0.23.2-5ubuntu6) ...
2026-03-08T23:13:28.919 INFO:teuthology.orchestra.run.vm06.stdout:Removing python3-requests-oauthlib (1.3.0+ds-0.1) ...
2026-03-08T23:13:28.946 INFO:teuthology.orchestra.run.vm11.stdout:Removing python3-tempita (0.5.2-6ubuntu1) ...
2026-03-08T23:13:28.975 INFO:teuthology.orchestra.run.vm06.stdout:Removing python3-rsa (4.8-1) ...
2026-03-08T23:13:29.007 INFO:teuthology.orchestra.run.vm11.stdout:Removing python3-threadpoolctl (3.1.0-1) ...
2026-03-08T23:13:29.035 INFO:teuthology.orchestra.run.vm06.stdout:Removing python3-simplegeneric (0.8.1-3) ...
2026-03-08T23:13:29.073 INFO:teuthology.orchestra.run.vm11.stdout:Removing python3-waitress (1.4.4-1.1ubuntu1.1) ...
2026-03-08T23:13:29.091 INFO:teuthology.orchestra.run.vm06.stdout:Removing python3-simplejson (3.17.6-1build1) ...
2026-03-08T23:13:29.199 INFO:teuthology.orchestra.run.vm11.stdout:Removing python3-wcwidth (0.2.5+dfsg1-1) ...
2026-03-08T23:13:29.220 INFO:teuthology.orchestra.run.vm06.stdout:Removing python3-singledispatch (3.4.0.3-3) ...
2026-03-08T23:13:29.259 INFO:teuthology.orchestra.run.vm11.stdout:Removing python3-webob (1:1.8.6-1.1ubuntu0.1) ...
2026-03-08T23:13:29.277 INFO:teuthology.orchestra.run.vm06.stdout:Removing python3-sklearn-lib:amd64 (0.23.2-5ubuntu6) ...
2026-03-08T23:13:29.315 INFO:teuthology.orchestra.run.vm06.stdout:Removing python3-tempita (0.5.2-6ubuntu1) ...
2026-03-08T23:13:29.319 INFO:teuthology.orchestra.run.vm11.stdout:Removing python3-websocket (1.2.3-1) ...
2026-03-08T23:13:29.375 INFO:teuthology.orchestra.run.vm06.stdout:Removing python3-threadpoolctl (3.1.0-1) ...
2026-03-08T23:13:29.378 INFO:teuthology.orchestra.run.vm11.stdout:Removing python3-werkzeug (2.0.2+dfsg1-1ubuntu0.22.04.3) ...
2026-03-08T23:13:29.432 INFO:teuthology.orchestra.run.vm06.stdout:Removing python3-waitress (1.4.4-1.1ubuntu1.1) ...
2026-03-08T23:13:29.439 INFO:teuthology.orchestra.run.vm11.stdout:Removing python3-zc.lockfile (2.0-1) ...
2026-03-08T23:13:29.693 INFO:teuthology.orchestra.run.vm11.stdout:Removing qttranslations5-l10n (5.15.3-1) ...
2026-03-08T23:13:29.695 INFO:teuthology.orchestra.run.vm06.stdout:Removing python3-wcwidth (0.2.5+dfsg1-1) ...
2026-03-08T23:13:29.716 INFO:teuthology.orchestra.run.vm11.stdout:Removing smartmontools (7.2-1ubuntu0.1) ...
2026-03-08T23:13:29.752 INFO:teuthology.orchestra.run.vm06.stdout:Removing python3-webob (1:1.8.6-1.1ubuntu0.1) ...
2026-03-08T23:13:29.806 INFO:teuthology.orchestra.run.vm06.stdout:Removing python3-websocket (1.2.3-1) ...
2026-03-08T23:13:29.862 INFO:teuthology.orchestra.run.vm06.stdout:Removing python3-werkzeug (2.0.2+dfsg1-1ubuntu0.22.04.3) ...
2026-03-08T23:13:29.967 INFO:teuthology.orchestra.run.vm06.stdout:Removing python3-zc.lockfile (2.0-1) ...
2026-03-08T23:13:30.076 INFO:teuthology.orchestra.run.vm06.stdout:Removing qttranslations5-l10n (5.15.3-1) ...
2026-03-08T23:13:30.100 INFO:teuthology.orchestra.run.vm06.stdout:Removing smartmontools (7.2-1ubuntu0.1) ...
2026-03-08T23:13:30.229 INFO:teuthology.orchestra.run.vm11.stdout:Removing socat (1.7.4.1-3ubuntu4) ...
2026-03-08T23:13:30.246 INFO:teuthology.orchestra.run.vm11.stdout:Removing unzip (6.0-26ubuntu3.2) ...
2026-03-08T23:13:30.267 INFO:teuthology.orchestra.run.vm11.stdout:Removing xmlstarlet (1.6.1-2.1) ...
2026-03-08T23:13:30.300 INFO:teuthology.orchestra.run.vm11.stdout:Removing zip (3.0-12build2) ...
2026-03-08T23:13:30.329 INFO:teuthology.orchestra.run.vm11.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ...
2026-03-08T23:13:30.345 INFO:teuthology.orchestra.run.vm11.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-08T23:13:30.476 INFO:teuthology.orchestra.run.vm11.stdout:Processing triggers for mailcap (3.70+nmu1ubuntu1) ...
2026-03-08T23:13:30.488 INFO:teuthology.orchestra.run.vm11.stdout:Processing triggers for initramfs-tools (0.140ubuntu13.5) ...
2026-03-08T23:13:30.509 INFO:teuthology.orchestra.run.vm11.stdout:update-initramfs: Generating /boot/initrd.img-5.15.0-1092-kvm
2026-03-08T23:13:30.559 INFO:teuthology.orchestra.run.vm06.stdout:Removing socat (1.7.4.1-3ubuntu4) ...
2026-03-08T23:13:30.574 INFO:teuthology.orchestra.run.vm06.stdout:Removing unzip (6.0-26ubuntu3.2) ...
2026-03-08T23:13:30.598 INFO:teuthology.orchestra.run.vm06.stdout:Removing xmlstarlet (1.6.1-2.1) ...
2026-03-08T23:13:30.620 INFO:teuthology.orchestra.run.vm06.stdout:Removing zip (3.0-12build2) ...
2026-03-08T23:13:30.657 INFO:teuthology.orchestra.run.vm06.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ...
2026-03-08T23:13:30.668 INFO:teuthology.orchestra.run.vm06.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-08T23:13:30.832 INFO:teuthology.orchestra.run.vm06.stdout:Processing triggers for mailcap (3.70+nmu1ubuntu1) ...
2026-03-08T23:13:30.840 INFO:teuthology.orchestra.run.vm06.stdout:Processing triggers for initramfs-tools (0.140ubuntu13.5) ...
2026-03-08T23:13:30.855 INFO:teuthology.orchestra.run.vm06.stdout:update-initramfs: Generating /boot/initrd.img-5.15.0-1092-kvm
2026-03-08T23:13:32.075 INFO:teuthology.orchestra.run.vm11.stdout:W: mkconf: MD subsystem is not loaded, thus I cannot scan for arrays.
2026-03-08T23:13:32.075 INFO:teuthology.orchestra.run.vm11.stdout:W: mdadm: failed to auto-generate temporary mdadm.conf file.
2026-03-08T23:13:32.480 INFO:teuthology.orchestra.run.vm06.stdout:W: mkconf: MD subsystem is not loaded, thus I cannot scan for arrays.
2026-03-08T23:13:32.481 INFO:teuthology.orchestra.run.vm06.stdout:W: mdadm: failed to auto-generate temporary mdadm.conf file.
2026-03-08T23:13:34.258 INFO:teuthology.orchestra.run.vm11.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-08T23:13:34.260 DEBUG:teuthology.parallel:result is None
2026-03-08T23:13:34.652 INFO:teuthology.orchestra.run.vm06.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-08T23:13:34.655 DEBUG:teuthology.parallel:result is None
2026-03-08T23:13:34.655 INFO:teuthology.task.install:Removing ceph sources lists on ubuntu@vm06.local
2026-03-08T23:13:34.655 INFO:teuthology.task.install:Removing ceph sources lists on ubuntu@vm11.local
2026-03-08T23:13:34.655 DEBUG:teuthology.orchestra.run.vm06:> sudo rm -f /etc/apt/sources.list.d/ceph.list
2026-03-08T23:13:34.655 DEBUG:teuthology.orchestra.run.vm11:> sudo rm -f /etc/apt/sources.list.d/ceph.list
2026-03-08T23:13:34.664 DEBUG:teuthology.orchestra.run.vm11:> sudo apt-get update
2026-03-08T23:13:34.703 DEBUG:teuthology.orchestra.run.vm06:> sudo apt-get update
2026-03-08T23:13:34.882 INFO:teuthology.orchestra.run.vm06.stdout:Hit:1 https://archive.ubuntu.com/ubuntu jammy InRelease
2026-03-08T23:13:34.888 INFO:teuthology.orchestra.run.vm06.stdout:Hit:2 https://archive.ubuntu.com/ubuntu jammy-updates InRelease
2026-03-08T23:13:34.896 INFO:teuthology.orchestra.run.vm06.stdout:Hit:3 https://archive.ubuntu.com/ubuntu jammy-backports InRelease
2026-03-08T23:13:34.961 INFO:teuthology.orchestra.run.vm11.stdout:Hit:1 https://archive.ubuntu.com/ubuntu jammy InRelease
2026-03-08T23:13:34.968 INFO:teuthology.orchestra.run.vm11.stdout:Hit:2 https://security.ubuntu.com/ubuntu jammy-security InRelease
2026-03-08T23:13:34.970 INFO:teuthology.orchestra.run.vm06.stdout:Hit:4 https://security.ubuntu.com/ubuntu jammy-security InRelease
2026-03-08T23:13:34.989 INFO:teuthology.orchestra.run.vm11.stdout:Hit:3 https://archive.ubuntu.com/ubuntu jammy-updates InRelease
2026-03-08T23:13:35.021 INFO:teuthology.orchestra.run.vm11.stdout:Hit:4 https://archive.ubuntu.com/ubuntu jammy-backports InRelease
2026-03-08T23:13:36.045 INFO:teuthology.orchestra.run.vm06.stdout:Reading package lists...
2026-03-08T23:13:36.058 DEBUG:teuthology.parallel:result is None
2026-03-08T23:13:36.080 INFO:teuthology.orchestra.run.vm11.stdout:Reading package lists...
2026-03-08T23:13:36.092 DEBUG:teuthology.parallel:result is None
2026-03-08T23:13:36.092 DEBUG:teuthology.run_tasks:Unwinding manager clock
2026-03-08T23:13:36.094 INFO:teuthology.task.clock:Checking final clock skew...
2026-03-08T23:13:36.094 DEBUG:teuthology.orchestra.run.vm06:> PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-08T23:13:36.096 DEBUG:teuthology.orchestra.run.vm11:> PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-08T23:13:36.292 INFO:teuthology.orchestra.run.vm06.stdout: remote refid st t when poll reach delay offset jitter
2026-03-08T23:13:36.292 INFO:teuthology.orchestra.run.vm06.stdout:==============================================================================
2026-03-08T23:13:36.292 INFO:teuthology.orchestra.run.vm06.stdout: 0.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-08T23:13:36.292 INFO:teuthology.orchestra.run.vm06.stdout: 1.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-08T23:13:36.292 INFO:teuthology.orchestra.run.vm06.stdout: 2.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-08T23:13:36.292 INFO:teuthology.orchestra.run.vm06.stdout: 3.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-08T23:13:36.292 INFO:teuthology.orchestra.run.vm06.stdout: ntp.ubuntu.com .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-08T23:13:36.292 INFO:teuthology.orchestra.run.vm06.stdout:-ntp2.wup-de.hos 237.17.204.95 2 u 56 64 377 30.971 -0.323 0.228
2026-03-08T23:13:36.292 INFO:teuthology.orchestra.run.vm06.stdout:-ntp2.uni-ulm.de 129.69.253.1 2 u 41 64 377 28.005 -1.181 1.182
2026-03-08T23:13:36.292 INFO:teuthology.orchestra.run.vm06.stdout:+vps-fra8.orlean 195.145.119.188 2 u 50 64 377 29.687 -1.990 0.257
2026-03-08T23:13:36.292 INFO:teuthology.orchestra.run.vm06.stdout:*ntp3.uni-ulm.de 129.69.253.1 2 u 53 64 377 27.151 -1.848 0.294
2026-03-08T23:13:36.292 INFO:teuthology.orchestra.run.vm06.stdout:-79.133.44.139 .MBGh. 1 u 43 64 377 20.527 -0.356 1.278
2026-03-08T23:13:36.292 INFO:teuthology.orchestra.run.vm06.stdout:-47.ip-51-75-67. 185.248.188.98 2 u 39 64 377 21.194 +0.124 1.194
2026-03-08T23:13:36.292 INFO:teuthology.orchestra.run.vm06.stdout:+185.125.190.57 194.121.207.249 2 u 17 64 377 35.122 -1.991 0.324
2026-03-08T23:13:36.613 INFO:teuthology.orchestra.run.vm11.stdout: remote refid st t when poll reach delay offset jitter
2026-03-08T23:13:36.613 INFO:teuthology.orchestra.run.vm11.stdout:==============================================================================
2026-03-08T23:13:36.613 INFO:teuthology.orchestra.run.vm11.stdout: 0.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-08T23:13:36.613 INFO:teuthology.orchestra.run.vm11.stdout: 1.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-08T23:13:36.613 INFO:teuthology.orchestra.run.vm11.stdout: 2.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-08T23:13:36.613 INFO:teuthology.orchestra.run.vm11.stdout: 3.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-08T23:13:36.613 INFO:teuthology.orchestra.run.vm11.stdout: ntp.ubuntu.com .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-08T23:13:36.613 INFO:teuthology.orchestra.run.vm11.stdout:+ntp2.uni-ulm.de 129.69.253.1 2 u 57 64 377 27.074 +0.279 0.234
2026-03-08T23:13:36.613 INFO:teuthology.orchestra.run.vm11.stdout:-vps-fra8.orlean 195.145.119.188 2 u 52 64 377 33.210 +0.627 0.228
2026-03-08T23:13:36.613 INFO:teuthology.orchestra.run.vm11.stdout:-47.ip-51-75-67. 185.248.188.98 2 u 46 128 377 21.182 +1.917 1.073
2026-03-08T23:13:36.613 INFO:teuthology.orchestra.run.vm11.stdout:-79.133.44.139 .MBGh. 1 u 45 64 377 20.507 +1.401 1.021
2026-03-08T23:13:36.613 INFO:teuthology.orchestra.run.vm11.stdout:+185.125.190.58 145.238.80.80 2 u 9 64 377 35.168 +0.103 0.164
2026-03-08T23:13:36.613 INFO:teuthology.orchestra.run.vm11.stdout:-ntp3.uni-ulm.de 129.69.253.1 2 u 41 64 377 27.345 +0.466 1.111
2026-03-08T23:13:36.613 INFO:teuthology.orchestra.run.vm11.stdout:*185.125.190.57 194.121.207.249 2 u 18 64 377 35.385 +0.025 0.176
2026-03-08T23:13:36.613 INFO:teuthology.orchestra.run.vm11.stdout:+185.125.190.56 79.243.60.50 2 u 19 64 377 35.358 -0.024 0.176
2026-03-08T23:13:36.613 DEBUG:teuthology.run_tasks:Unwinding manager ansible.cephlab
2026-03-08T23:13:36.616 INFO:teuthology.task.ansible:Skipping ansible cleanup...
2026-03-08T23:13:36.616 DEBUG:teuthology.run_tasks:Unwinding manager selinux
2026-03-08T23:13:36.618 DEBUG:teuthology.run_tasks:Unwinding manager pcp
2026-03-08T23:13:36.622 DEBUG:teuthology.run_tasks:Unwinding manager internal.timer
2026-03-08T23:13:36.624 INFO:teuthology.task.internal:Duration was 1279.620826 seconds
2026-03-08T23:13:36.624 DEBUG:teuthology.run_tasks:Unwinding manager internal.syslog
2026-03-08T23:13:36.626 INFO:teuthology.task.internal.syslog:Shutting down syslog monitoring...
2026-03-08T23:13:36.626 DEBUG:teuthology.orchestra.run.vm06:> sudo rm -f -- /etc/rsyslog.d/80-cephtest.conf && sudo service rsyslog restart
2026-03-08T23:13:36.627 DEBUG:teuthology.orchestra.run.vm11:> sudo rm -f -- /etc/rsyslog.d/80-cephtest.conf && sudo service rsyslog restart
2026-03-08T23:13:36.653 INFO:teuthology.task.internal.syslog:Checking logs for errors...
2026-03-08T23:13:36.653 DEBUG:teuthology.task.internal.syslog:Checking ubuntu@vm06.local
2026-03-08T23:13:36.653 DEBUG:teuthology.orchestra.run.vm06:> grep -E --binary-files=text '\bBUG\b|\bINFO\b|\bDEADLOCK\b' /home/ubuntu/cephtest/archive/syslog/kern.log | grep -v 'task .* blocked for more than .* seconds' | grep -v 'lockdep is turned off' | grep -v 'trying to register non-static key' | grep -v 'DEBUG: fsize' | grep -v CRON | grep -v 'BUG: bad unlock balance detected' | grep -v 'inconsistent lock state' | grep -v '*** DEADLOCK ***' | grep -v 'INFO: possible irq lock inversion dependency detected' | grep -v 'INFO: NMI handler (perf_event_nmi_handler) took too long to run' | grep -v 'INFO: recovery required on readonly' | grep -v 'ceph-create-keys: INFO' | grep -v INFO:ceph-create-keys | grep -v 'Loaded datasource DataSourceOpenStack' | grep -v 'container-storage-setup: INFO: Volume group backing root filesystem could not be determined' | grep -E -v '\bsalt-master\b|\bsalt-minion\b|\bsalt-api\b' | grep -v ceph-crash | grep -E -v '\btcmu-runner\b.*\bINFO\b' | head -n 1
2026-03-08T23:13:36.702 DEBUG:teuthology.task.internal.syslog:Checking ubuntu@vm11.local
2026-03-08T23:13:36.702 DEBUG:teuthology.orchestra.run.vm11:> grep -E --binary-files=text '\bBUG\b|\bINFO\b|\bDEADLOCK\b' /home/ubuntu/cephtest/archive/syslog/kern.log | grep -v 'task .* blocked for more than .* seconds' | grep -v 'lockdep is turned off' | grep -v 'trying to register non-static key' | grep -v 'DEBUG: fsize' | grep -v CRON | grep -v 'BUG: bad unlock balance detected' | grep -v 'inconsistent lock state' | grep -v '*** DEADLOCK ***' | grep -v 'INFO: possible irq lock inversion dependency detected' | grep -v 'INFO: NMI handler (perf_event_nmi_handler) took too long to run' | grep -v 'INFO: recovery required on readonly' | grep -v 'ceph-create-keys: INFO' | grep -v INFO:ceph-create-keys | grep -v 'Loaded datasource DataSourceOpenStack' | grep -v 'container-storage-setup: INFO: Volume group backing root filesystem could not be determined' | grep -E -v '\bsalt-master\b|\bsalt-minion\b|\bsalt-api\b' | grep -v ceph-crash | grep -E -v '\btcmu-runner\b.*\bINFO\b' | head -n 1
2026-03-08T23:13:36.713 INFO:teuthology.task.internal.syslog:Gathering journactl...
2026-03-08T23:13:36.713 DEBUG:teuthology.orchestra.run.vm06:> sudo journalctl > /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-08T23:13:36.744 DEBUG:teuthology.orchestra.run.vm11:> sudo journalctl > /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-08T23:13:36.826 INFO:teuthology.task.internal.syslog:Compressing syslogs...
2026-03-08T23:13:36.826 DEBUG:teuthology.orchestra.run.vm06:> find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --max-args=1 --max-procs=0 --verbose --no-run-if-empty -- gzip -5 --verbose --
2026-03-08T23:13:36.827 DEBUG:teuthology.orchestra.run.vm11:> find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --max-args=1 --max-procs=0 --verbose --no-run-if-empty -- gzip -5 --verbose --
2026-03-08T23:13:36.835 INFO:teuthology.orchestra.run.vm11.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-08T23:13:36.836 INFO:teuthology.orchestra.run.vm11.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-08T23:13:36.836 INFO:teuthology.orchestra.run.vm11.stderr:/home/ubuntu/cephtest/archive/syslog/misc.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/misc.log.gz
2026-03-08T23:13:36.836 INFO:teuthology.orchestra.run.vm11.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-08T23:13:36.836 INFO:teuthology.orchestra.run.vm11.stderr:/home/ubuntu/cephtest/archive/syslog/kern.log: /home/ubuntu/cephtest/archive/syslog/journalctl.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/kern.log.gz
2026-03-08T23:13:36.836 INFO:teuthology.orchestra.run.vm06.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-08T23:13:36.836 INFO:teuthology.orchestra.run.vm06.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-08T23:13:36.836 INFO:teuthology.orchestra.run.vm06.stderr:/home/ubuntu/cephtest/archive/syslog/misc.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/misc.log.gz
2026-03-08T23:13:36.836 INFO:teuthology.orchestra.run.vm06.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-08T23:13:36.836 INFO:teuthology.orchestra.run.vm06.stderr:/home/ubuntu/cephtest/archive/syslog/kern.log: /home/ubuntu/cephtest/archive/syslog/journalctl.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/kern.log.gz
2026-03-08T23:13:36.848 INFO:teuthology.orchestra.run.vm11.stderr: 90.2% -- replaced with /home/ubuntu/cephtest/archive/syslog/journalctl.log.gz
2026-03-08T23:13:36.851 INFO:teuthology.orchestra.run.vm06.stderr: 92.4% -- replaced with /home/ubuntu/cephtest/archive/syslog/journalctl.log.gz
2026-03-08T23:13:36.853 DEBUG:teuthology.run_tasks:Unwinding manager internal.sudo
2026-03-08T23:13:36.855 INFO:teuthology.task.internal:Restoring /etc/sudoers...
2026-03-08T23:13:36.855 DEBUG:teuthology.orchestra.run.vm06:> sudo mv -f /etc/sudoers.orig.teuthology /etc/sudoers
2026-03-08T23:13:36.903 DEBUG:teuthology.orchestra.run.vm11:> sudo mv -f /etc/sudoers.orig.teuthology /etc/sudoers
2026-03-08T23:13:36.913 DEBUG:teuthology.run_tasks:Unwinding manager internal.coredump
2026-03-08T23:13:36.916 DEBUG:teuthology.orchestra.run.vm06:> sudo sysctl -w kernel.core_pattern=core && sudo bash -c 'for f in `find /home/ubuntu/cephtest/archive/coredump -type f`; do file $f | grep -q systemd-sysusers && rm $f || true ; done' && rmdir --ignore-fail-on-non-empty -- /home/ubuntu/cephtest/archive/coredump
2026-03-08T23:13:36.945 DEBUG:teuthology.orchestra.run.vm11:> sudo sysctl -w kernel.core_pattern=core && sudo bash -c 'for f in `find /home/ubuntu/cephtest/archive/coredump -type f`; do file $f | grep -q systemd-sysusers && rm $f || true ; done' && rmdir --ignore-fail-on-non-empty -- /home/ubuntu/cephtest/archive/coredump
2026-03-08T23:13:36.951 INFO:teuthology.orchestra.run.vm06.stdout:kernel.core_pattern = core
2026-03-08T23:13:36.962 INFO:teuthology.orchestra.run.vm11.stdout:kernel.core_pattern = core
2026-03-08T23:13:36.970 DEBUG:teuthology.orchestra.run.vm06:> test -e /home/ubuntu/cephtest/archive/coredump
2026-03-08T23:13:37.006 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-08T23:13:37.006 DEBUG:teuthology.orchestra.run.vm11:> test -e /home/ubuntu/cephtest/archive/coredump
2026-03-08T23:13:37.016 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-08T23:13:37.016 DEBUG:teuthology.run_tasks:Unwinding manager internal.archive
2026-03-08T23:13:37.018 INFO:teuthology.task.internal:Transferring archived files...
2026-03-08T23:13:37.019 DEBUG:teuthology.misc:Transferring archived files from vm06:/home/ubuntu/cephtest/archive to /archive/kyr-2026-03-08_22:22:45-orch:cephadm-squid-none-default-vps/289/remote/vm06
2026-03-08T23:13:37.019 DEBUG:teuthology.orchestra.run.vm06:> sudo tar c -f - -C /home/ubuntu/cephtest/archive -- .
2026-03-08T23:13:37.055 DEBUG:teuthology.misc:Transferring archived files from vm11:/home/ubuntu/cephtest/archive to /archive/kyr-2026-03-08_22:22:45-orch:cephadm-squid-none-default-vps/289/remote/vm11
2026-03-08T23:13:37.055 DEBUG:teuthology.orchestra.run.vm11:> sudo tar c -f - -C /home/ubuntu/cephtest/archive -- .
2026-03-08T23:13:37.068 INFO:teuthology.task.internal:Removing archive directory...
2026-03-08T23:13:37.068 DEBUG:teuthology.orchestra.run.vm06:> rm -rf -- /home/ubuntu/cephtest/archive
2026-03-08T23:13:37.101 DEBUG:teuthology.orchestra.run.vm11:> rm -rf -- /home/ubuntu/cephtest/archive
2026-03-08T23:13:37.113 DEBUG:teuthology.run_tasks:Unwinding manager internal.archive_upload
2026-03-08T23:13:37.116 INFO:teuthology.task.internal:Not uploading archives.
2026-03-08T23:13:37.116 DEBUG:teuthology.run_tasks:Unwinding manager internal.base
2026-03-08T23:13:37.119 INFO:teuthology.task.internal:Tidying up after the test...
2026-03-08T23:13:37.119 DEBUG:teuthology.orchestra.run.vm06:> find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest
2026-03-08T23:13:37.144 DEBUG:teuthology.orchestra.run.vm11:> find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest
2026-03-08T23:13:37.146 INFO:teuthology.orchestra.run.vm06.stdout: 258079 4 drwxr-xr-x 2 ubuntu ubuntu 4096 Mar 8 23:13 /home/ubuntu/cephtest
2026-03-08T23:13:37.157 INFO:teuthology.orchestra.run.vm11.stdout: 258067 4 drwxr-xr-x 2 ubuntu ubuntu 4096 Mar 8 23:13 /home/ubuntu/cephtest
2026-03-08T23:13:37.158 DEBUG:teuthology.run_tasks:Unwinding manager console_log
2026-03-08T23:13:37.165 INFO:teuthology.run:Summary data:
description: orch:cephadm/with-work/{0-distro/ubuntu_22.04 fixed-2 mode/root mon_election/connectivity msgr/async-v2only start tasks/rotate-keys}
duration: 1279.6208262443542
flavor: default
owner: kyr
success: true
2026-03-08T23:13:37.165 DEBUG:teuthology.report:Pushing job info to http://localhost:8080
2026-03-08T23:13:37.182 INFO:teuthology.run:pass